Test Report: KVM_Linux_crio 19758

487a5cf556320fbeb648c9691968ff5b5aeb4ad7:2024-10-25:36805

Test fail (10/326)

TestAddons/parallel/Ingress (153.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-413632 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a6f49bba-8fb4-4037-8ccc-f07fcab0a94d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a6f49bba-8fb4-4037-8ccc-f07fcab0a94d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005252453s
I1025 21:39:03.532861  669177 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-413632 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.188558929s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-413632 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.223
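
Failure note: the assertion at addons_test.go:278 fails because the curl run inside the VM never gets a response; "Process exited with status 28" matches curl's timeout error (CURLE_OPERATION_TIMEDOUT), surfaced through minikube ssh. A minimal manual triage sketch follows, mirroring the commands in this log: the added --max-time only bounds the request, and the controller deployment name is an assumption (the ingress addon's usual default), so adjust it if it differs.

    # re-run the failing request from inside the VM with verbose output and a bounded timeout
    out/minikube-linux-amd64 -p addons-413632 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # check that the ingress controller and the backing nginx pod/service are actually up
    kubectl --context addons-413632 -n ingress-nginx get pods -o wide
    kubectl --context addons-413632 get ingress,svc,pods -n default
    # inspect controller logs for routing errors (deployment name assumed)
    kubectl --context addons-413632 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
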
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-413632 -n addons-413632
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 logs -n 25: (1.246745647s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| delete  | -p download-only-941359                                                                     | download-only-941359 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| delete  | -p download-only-719988                                                                     | download-only-719988 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| delete  | -p download-only-941359                                                                     | download-only-941359 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-275962 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | binary-mirror-275962                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41967                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-275962                                                                     | binary-mirror-275962 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | addons-413632                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | addons-413632                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-413632 --wait=true                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | -p addons-413632                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-413632 ip                                                                            | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-413632 ssh cat                                                                       | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | /opt/local-path-provisioner/pvc-635e1fba-296d-4aed-ae47-8b59b1722843_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:39 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-413632 ssh curl -s                                                                   | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:39 UTC | 25 Oct 24 21:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:40 UTC | 25 Oct 24 21:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:40 UTC | 25 Oct 24 21:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-413632 ip                                                                            | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:41 UTC | 25 Oct 24 21:41 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
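	For readability, the wrapped Args cells in the "start" row above amount to a single invocation; reconstructed here as one line (flags exactly as listed in the table, binary name taken from elsewhere in this log):
	
	out/minikube-linux-amd64 start -p addons-413632 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher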
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 21:35:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:35:51.195159  669884 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:35:51.195418  669884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:51.195428  669884 out.go:358] Setting ErrFile to fd 2...
	I1025 21:35:51.195432  669884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:51.195588  669884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:35:51.196193  669884 out.go:352] Setting JSON to false
	I1025 21:35:51.197134  669884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":15495,"bootTime":1729876656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:35:51.197240  669884 start.go:139] virtualization: kvm guest
	I1025 21:35:51.199531  669884 out.go:177] * [addons-413632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:35:51.201023  669884 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 21:35:51.201015  669884 notify.go:220] Checking for updates...
	I1025 21:35:51.202508  669884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:51.203791  669884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:35:51.205122  669884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:51.206471  669884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:35:51.207687  669884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:35:51.209343  669884 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:35:51.241077  669884 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 21:35:51.242444  669884 start.go:297] selected driver: kvm2
	I1025 21:35:51.242456  669884 start.go:901] validating driver "kvm2" against <nil>
	I1025 21:35:51.242468  669884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:35:51.243140  669884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:51.243228  669884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:35:51.258156  669884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 21:35:51.258213  669884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 21:35:51.258510  669884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:35:51.258544  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:35:51.258609  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:35:51.258622  669884 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:35:51.258690  669884 start.go:340] cluster config:
	{Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:35:51.258834  669884 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:51.260726  669884 out.go:177] * Starting "addons-413632" primary control-plane node in "addons-413632" cluster
	I1025 21:35:51.261988  669884 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:35:51.262038  669884 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 21:35:51.262052  669884 cache.go:56] Caching tarball of preloaded images
	I1025 21:35:51.262141  669884 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:35:51.262157  669884 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 21:35:51.262497  669884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json ...
	I1025 21:35:51.262522  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json: {Name:mkca788804c24b7c5ae7d3793d37c40c7bc3ab83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:35:51.262701  669884 start.go:360] acquireMachinesLock for addons-413632: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 21:35:51.262764  669884 start.go:364] duration metric: took 45.057µs to acquireMachinesLock for "addons-413632"
	I1025 21:35:51.262791  669884 start.go:93] Provisioning new machine with config: &{Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:35:51.262856  669884 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 21:35:51.264370  669884 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 21:35:51.264520  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:35:51.264564  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:35:51.278817  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I1025 21:35:51.279255  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:35:51.279958  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:35:51.279982  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:35:51.280325  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:35:51.280507  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:35:51.280659  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:35:51.280837  669884 start.go:159] libmachine.API.Create for "addons-413632" (driver="kvm2")
	I1025 21:35:51.280867  669884 client.go:168] LocalClient.Create starting
	I1025 21:35:51.280900  669884 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem
	I1025 21:35:51.462035  669884 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem
	I1025 21:35:51.517275  669884 main.go:141] libmachine: Running pre-create checks...
	I1025 21:35:51.517299  669884 main.go:141] libmachine: (addons-413632) Calling .PreCreateCheck
	I1025 21:35:51.517809  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:35:51.518323  669884 main.go:141] libmachine: Creating machine...
	I1025 21:35:51.518342  669884 main.go:141] libmachine: (addons-413632) Calling .Create
	I1025 21:35:51.518493  669884 main.go:141] libmachine: (addons-413632) creating KVM machine...
	I1025 21:35:51.518513  669884 main.go:141] libmachine: (addons-413632) creating network...
	I1025 21:35:51.519781  669884 main.go:141] libmachine: (addons-413632) DBG | found existing default KVM network
	I1025 21:35:51.520575  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.520430  669906 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1025 21:35:51.520606  669884 main.go:141] libmachine: (addons-413632) DBG | created network xml: 
	I1025 21:35:51.520615  669884 main.go:141] libmachine: (addons-413632) DBG | <network>
	I1025 21:35:51.520622  669884 main.go:141] libmachine: (addons-413632) DBG |   <name>mk-addons-413632</name>
	I1025 21:35:51.520627  669884 main.go:141] libmachine: (addons-413632) DBG |   <dns enable='no'/>
	I1025 21:35:51.520632  669884 main.go:141] libmachine: (addons-413632) DBG |   
	I1025 21:35:51.520640  669884 main.go:141] libmachine: (addons-413632) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1025 21:35:51.520647  669884 main.go:141] libmachine: (addons-413632) DBG |     <dhcp>
	I1025 21:35:51.520660  669884 main.go:141] libmachine: (addons-413632) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1025 21:35:51.520675  669884 main.go:141] libmachine: (addons-413632) DBG |     </dhcp>
	I1025 21:35:51.520689  669884 main.go:141] libmachine: (addons-413632) DBG |   </ip>
	I1025 21:35:51.520733  669884 main.go:141] libmachine: (addons-413632) DBG |   
	I1025 21:35:51.520762  669884 main.go:141] libmachine: (addons-413632) DBG | </network>
	I1025 21:35:51.520790  669884 main.go:141] libmachine: (addons-413632) DBG | 
	I1025 21:35:51.525979  669884 main.go:141] libmachine: (addons-413632) DBG | trying to create private KVM network mk-addons-413632 192.168.39.0/24...
	I1025 21:35:51.594310  669884 main.go:141] libmachine: (addons-413632) DBG | private KVM network mk-addons-413632 192.168.39.0/24 created
	I1025 21:35:51.594349  669884 main.go:141] libmachine: (addons-413632) setting up store path in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 ...
	I1025 21:35:51.594362  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.594305  669906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:51.594381  669884 main.go:141] libmachine: (addons-413632) building disk image from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 21:35:51.594569  669884 main.go:141] libmachine: (addons-413632) Downloading /home/jenkins/minikube-integration/19758-661979/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1025 21:35:51.884182  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.884040  669906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa...
	I1025 21:35:52.005446  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:52.005271  669906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/addons-413632.rawdisk...
	I1025 21:35:52.005487  669884 main.go:141] libmachine: (addons-413632) DBG | Writing magic tar header
	I1025 21:35:52.005523  669884 main.go:141] libmachine: (addons-413632) DBG | Writing SSH key tar header
	I1025 21:35:52.005533  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 (perms=drwx------)
	I1025 21:35:52.005541  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:52.005387  669906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 ...
	I1025 21:35:52.005554  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632
	I1025 21:35:52.005566  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines (perms=drwxr-xr-x)
	I1025 21:35:52.005594  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines
	I1025 21:35:52.005605  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube (perms=drwxr-xr-x)
	I1025 21:35:52.005612  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:52.005620  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979
	I1025 21:35:52.005626  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1025 21:35:52.005631  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins
	I1025 21:35:52.005639  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home
	I1025 21:35:52.005660  669884 main.go:141] libmachine: (addons-413632) DBG | skipping /home - not owner
	I1025 21:35:52.005676  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979 (perms=drwxrwxr-x)
	I1025 21:35:52.005687  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 21:35:52.005696  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 21:35:52.005735  669884 main.go:141] libmachine: (addons-413632) creating domain...
	I1025 21:35:52.006986  669884 main.go:141] libmachine: (addons-413632) define libvirt domain using xml: 
	I1025 21:35:52.007001  669884 main.go:141] libmachine: (addons-413632) <domain type='kvm'>
	I1025 21:35:52.007008  669884 main.go:141] libmachine: (addons-413632)   <name>addons-413632</name>
	I1025 21:35:52.007016  669884 main.go:141] libmachine: (addons-413632)   <memory unit='MiB'>4000</memory>
	I1025 21:35:52.007031  669884 main.go:141] libmachine: (addons-413632)   <vcpu>2</vcpu>
	I1025 21:35:52.007040  669884 main.go:141] libmachine: (addons-413632)   <features>
	I1025 21:35:52.007050  669884 main.go:141] libmachine: (addons-413632)     <acpi/>
	I1025 21:35:52.007056  669884 main.go:141] libmachine: (addons-413632)     <apic/>
	I1025 21:35:52.007064  669884 main.go:141] libmachine: (addons-413632)     <pae/>
	I1025 21:35:52.007074  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007080  669884 main.go:141] libmachine: (addons-413632)   </features>
	I1025 21:35:52.007087  669884 main.go:141] libmachine: (addons-413632)   <cpu mode='host-passthrough'>
	I1025 21:35:52.007093  669884 main.go:141] libmachine: (addons-413632)   
	I1025 21:35:52.007099  669884 main.go:141] libmachine: (addons-413632)   </cpu>
	I1025 21:35:52.007104  669884 main.go:141] libmachine: (addons-413632)   <os>
	I1025 21:35:52.007125  669884 main.go:141] libmachine: (addons-413632)     <type>hvm</type>
	I1025 21:35:52.007133  669884 main.go:141] libmachine: (addons-413632)     <boot dev='cdrom'/>
	I1025 21:35:52.007137  669884 main.go:141] libmachine: (addons-413632)     <boot dev='hd'/>
	I1025 21:35:52.007145  669884 main.go:141] libmachine: (addons-413632)     <bootmenu enable='no'/>
	I1025 21:35:52.007149  669884 main.go:141] libmachine: (addons-413632)   </os>
	I1025 21:35:52.007155  669884 main.go:141] libmachine: (addons-413632)   <devices>
	I1025 21:35:52.007159  669884 main.go:141] libmachine: (addons-413632)     <disk type='file' device='cdrom'>
	I1025 21:35:52.007169  669884 main.go:141] libmachine: (addons-413632)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/boot2docker.iso'/>
	I1025 21:35:52.007179  669884 main.go:141] libmachine: (addons-413632)       <target dev='hdc' bus='scsi'/>
	I1025 21:35:52.007186  669884 main.go:141] libmachine: (addons-413632)       <readonly/>
	I1025 21:35:52.007191  669884 main.go:141] libmachine: (addons-413632)     </disk>
	I1025 21:35:52.007202  669884 main.go:141] libmachine: (addons-413632)     <disk type='file' device='disk'>
	I1025 21:35:52.007213  669884 main.go:141] libmachine: (addons-413632)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1025 21:35:52.007220  669884 main.go:141] libmachine: (addons-413632)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/addons-413632.rawdisk'/>
	I1025 21:35:52.007230  669884 main.go:141] libmachine: (addons-413632)       <target dev='hda' bus='virtio'/>
	I1025 21:35:52.007235  669884 main.go:141] libmachine: (addons-413632)     </disk>
	I1025 21:35:52.007244  669884 main.go:141] libmachine: (addons-413632)     <interface type='network'>
	I1025 21:35:52.007250  669884 main.go:141] libmachine: (addons-413632)       <source network='mk-addons-413632'/>
	I1025 21:35:52.007256  669884 main.go:141] libmachine: (addons-413632)       <model type='virtio'/>
	I1025 21:35:52.007261  669884 main.go:141] libmachine: (addons-413632)     </interface>
	I1025 21:35:52.007266  669884 main.go:141] libmachine: (addons-413632)     <interface type='network'>
	I1025 21:35:52.007271  669884 main.go:141] libmachine: (addons-413632)       <source network='default'/>
	I1025 21:35:52.007277  669884 main.go:141] libmachine: (addons-413632)       <model type='virtio'/>
	I1025 21:35:52.007282  669884 main.go:141] libmachine: (addons-413632)     </interface>
	I1025 21:35:52.007288  669884 main.go:141] libmachine: (addons-413632)     <serial type='pty'>
	I1025 21:35:52.007293  669884 main.go:141] libmachine: (addons-413632)       <target port='0'/>
	I1025 21:35:52.007299  669884 main.go:141] libmachine: (addons-413632)     </serial>
	I1025 21:35:52.007304  669884 main.go:141] libmachine: (addons-413632)     <console type='pty'>
	I1025 21:35:52.007310  669884 main.go:141] libmachine: (addons-413632)       <target type='serial' port='0'/>
	I1025 21:35:52.007315  669884 main.go:141] libmachine: (addons-413632)     </console>
	I1025 21:35:52.007323  669884 main.go:141] libmachine: (addons-413632)     <rng model='virtio'>
	I1025 21:35:52.007354  669884 main.go:141] libmachine: (addons-413632)       <backend model='random'>/dev/random</backend>
	I1025 21:35:52.007378  669884 main.go:141] libmachine: (addons-413632)     </rng>
	I1025 21:35:52.007392  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007401  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007409  669884 main.go:141] libmachine: (addons-413632)   </devices>
	I1025 21:35:52.007416  669884 main.go:141] libmachine: (addons-413632) </domain>
	I1025 21:35:52.007428  669884 main.go:141] libmachine: (addons-413632) 
	I1025 21:35:52.011980  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:61:2d:1d in network default
	I1025 21:35:52.012549  669884 main.go:141] libmachine: (addons-413632) starting domain...
	I1025 21:35:52.012567  669884 main.go:141] libmachine: (addons-413632) ensuring networks are active...
	I1025 21:35:52.012578  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:52.013279  669884 main.go:141] libmachine: (addons-413632) Ensuring network default is active
	I1025 21:35:52.013597  669884 main.go:141] libmachine: (addons-413632) Ensuring network mk-addons-413632 is active
	I1025 21:35:52.014118  669884 main.go:141] libmachine: (addons-413632) getting domain XML...
	I1025 21:35:52.014892  669884 main.go:141] libmachine: (addons-413632) creating domain...
	I1025 21:35:53.200526  669884 main.go:141] libmachine: (addons-413632) waiting for IP...
	I1025 21:35:53.201456  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.201806  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.201886  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.201823  669906 retry.go:31] will retry after 247.899943ms: waiting for domain to come up
	I1025 21:35:53.451491  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.451996  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.452040  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.451964  669906 retry.go:31] will retry after 319.364472ms: waiting for domain to come up
	I1025 21:35:53.772482  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.772945  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.772985  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.772904  669906 retry.go:31] will retry after 331.396051ms: waiting for domain to come up
	I1025 21:35:54.105649  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:54.106095  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:54.106136  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:54.106070  669906 retry.go:31] will retry after 553.832242ms: waiting for domain to come up
	I1025 21:35:54.661791  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:54.662234  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:54.662289  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:54.662209  669906 retry.go:31] will retry after 552.909314ms: waiting for domain to come up
	I1025 21:35:55.217847  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:55.218251  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:55.218304  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:55.218238  669906 retry.go:31] will retry after 751.938155ms: waiting for domain to come up
	I1025 21:35:55.972115  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:55.972523  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:55.972561  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:55.972484  669906 retry.go:31] will retry after 1.136661726s: waiting for domain to come up
	I1025 21:35:57.110430  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:57.110930  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:57.110958  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:57.110875  669906 retry.go:31] will retry after 1.015893365s: waiting for domain to come up
	I1025 21:35:58.128288  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:58.128677  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:58.128718  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:58.128640  669906 retry.go:31] will retry after 1.174270445s: waiting for domain to come up
	I1025 21:35:59.304992  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:59.305371  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:59.305398  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:59.305337  669906 retry.go:31] will retry after 2.011576373s: waiting for domain to come up
	I1025 21:36:01.318687  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:01.319085  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:01.319114  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:01.319070  669906 retry.go:31] will retry after 2.767085669s: waiting for domain to come up
	I1025 21:36:04.089930  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:04.090383  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:04.090412  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:04.090322  669906 retry.go:31] will retry after 2.389221118s: waiting for domain to come up
	I1025 21:36:06.481504  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:06.482050  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:06.482078  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:06.481996  669906 retry.go:31] will retry after 4.019884751s: waiting for domain to come up
	I1025 21:36:10.506341  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:10.506867  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:10.506909  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:10.506847  669906 retry.go:31] will retry after 4.731359986s: waiting for domain to come up
	I1025 21:36:15.242714  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.243078  669884 main.go:141] libmachine: (addons-413632) found domain IP: 192.168.39.223
	I1025 21:36:15.243102  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has current primary IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.243108  669884 main.go:141] libmachine: (addons-413632) reserving static IP address...
	I1025 21:36:15.243524  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find host DHCP lease matching {name: "addons-413632", mac: "52:54:00:7e:f7:68", ip: "192.168.39.223"} in network mk-addons-413632
	I1025 21:36:15.320441  669884 main.go:141] libmachine: (addons-413632) DBG | Getting to WaitForSSH function...
	I1025 21:36:15.320480  669884 main.go:141] libmachine: (addons-413632) reserved static IP address 192.168.39.223 for domain addons-413632
	I1025 21:36:15.320494  669884 main.go:141] libmachine: (addons-413632) waiting for SSH...
	I1025 21:36:15.323692  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.324228  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.324258  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.324407  669884 main.go:141] libmachine: (addons-413632) DBG | Using SSH client type: external
	I1025 21:36:15.324438  669884 main.go:141] libmachine: (addons-413632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa (-rw-------)
	I1025 21:36:15.324483  669884 main.go:141] libmachine: (addons-413632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 21:36:15.324501  669884 main.go:141] libmachine: (addons-413632) DBG | About to run SSH command:
	I1025 21:36:15.324513  669884 main.go:141] libmachine: (addons-413632) DBG | exit 0
	I1025 21:36:15.449310  669884 main.go:141] libmachine: (addons-413632) DBG | SSH cmd err, output: <nil>: 
	I1025 21:36:15.449563  669884 main.go:141] libmachine: (addons-413632) KVM machine creation complete
	I1025 21:36:15.449854  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:36:15.450539  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:15.450724  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:15.450883  669884 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1025 21:36:15.450899  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:15.452241  669884 main.go:141] libmachine: Detecting operating system of created instance...
	I1025 21:36:15.452257  669884 main.go:141] libmachine: Waiting for SSH to be available...
	I1025 21:36:15.452263  669884 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 21:36:15.452272  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.454485  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.454849  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.454882  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.455002  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.455190  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.455480  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.455652  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.455843  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.456058  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.456072  669884 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 21:36:15.560496  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:36:15.560551  669884 main.go:141] libmachine: Detecting the provisioner...
	I1025 21:36:15.560565  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.563666  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.564106  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.564131  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.564257  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.564510  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.564682  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.564833  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.565024  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.565210  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.565221  669884 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1025 21:36:15.669809  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1025 21:36:15.669909  669884 main.go:141] libmachine: found compatible host: buildroot
	I1025 21:36:15.669919  669884 main.go:141] libmachine: Provisioning with buildroot...
	I1025 21:36:15.669927  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.670214  669884 buildroot.go:166] provisioning hostname "addons-413632"
	I1025 21:36:15.670246  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.670499  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.673011  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.673378  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.673404  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.673574  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.673785  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.673942  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.674077  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.674222  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.674437  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.674453  669884 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-413632 && echo "addons-413632" | sudo tee /etc/hostname
	I1025 21:36:15.790900  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-413632
	
	I1025 21:36:15.790934  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.793816  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.794142  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.794165  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.794322  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.794520  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.794675  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.794869  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.795108  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.795307  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.795325  669884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-413632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-413632/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-413632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:36:15.906205  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:36:15.906253  669884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 21:36:15.906287  669884 buildroot.go:174] setting up certificates
	I1025 21:36:15.906302  669884 provision.go:84] configureAuth start
	I1025 21:36:15.906319  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.906637  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:15.909098  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.909455  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.909480  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.909632  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.911884  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.912228  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.912260  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.912356  669884 provision.go:143] copyHostCerts
	I1025 21:36:15.912470  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 21:36:15.912622  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 21:36:15.912716  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 21:36:15.912795  669884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.addons-413632 san=[127.0.0.1 192.168.39.223 addons-413632 localhost minikube]
	I1025 21:36:16.033557  669884 provision.go:177] copyRemoteCerts
	I1025 21:36:16.033651  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:36:16.033692  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.036314  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.036678  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.036708  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.036875  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.037083  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.037256  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.037397  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.120241  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:36:16.145205  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 21:36:16.169800  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
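	# (Illustrative spot-check, not part of the recorded test run.) Once server.pem is in
	# place on the guest, the SANs requested at generation time above
	# (127.0.0.1 192.168.39.223 addons-413632 localhost minikube) can be confirmed with:
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'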
	I1025 21:36:16.194364  669884 provision.go:87] duration metric: took 288.042002ms to configureAuth
	I1025 21:36:16.194399  669884 buildroot.go:189] setting minikube options for container-runtime
	I1025 21:36:16.194623  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:16.194735  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.197803  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.198372  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.198405  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.198551  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.198734  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.198893  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.199025  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.199189  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:16.199416  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:16.199438  669884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:36:16.413395  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 21:36:16.413429  669884 main.go:141] libmachine: Checking connection to Docker...
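	# (Illustrative only, not run by the test.) A quick manual check that the sysconfig
	# drop-in written above landed and that CRI-O came back up after the restart:
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio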
	I1025 21:36:16.413441  669884 main.go:141] libmachine: (addons-413632) Calling .GetURL
	I1025 21:36:16.414935  669884 main.go:141] libmachine: (addons-413632) DBG | using libvirt version 6000000
	I1025 21:36:16.417165  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.417587  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.417621  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.417792  669884 main.go:141] libmachine: Docker is up and running!
	I1025 21:36:16.417809  669884 main.go:141] libmachine: Reticulating splines...
	I1025 21:36:16.417819  669884 client.go:171] duration metric: took 25.136942248s to LocalClient.Create
	I1025 21:36:16.417850  669884 start.go:167] duration metric: took 25.13703198s to libmachine.API.Create "addons-413632"
	I1025 21:36:16.417861  669884 start.go:293] postStartSetup for "addons-413632" (driver="kvm2")
	I1025 21:36:16.417873  669884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:36:16.417898  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.418102  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:36:16.418128  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.420283  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.420601  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.420622  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.420767  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.420928  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.421126  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.421250  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.503240  669884 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:36:16.507856  669884 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 21:36:16.507890  669884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 21:36:16.507987  669884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 21:36:16.508017  669884 start.go:296] duration metric: took 90.147535ms for postStartSetup
	I1025 21:36:16.508063  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:36:16.508719  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:16.511339  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.511665  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.511689  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.511990  669884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json ...
	I1025 21:36:16.512167  669884 start.go:128] duration metric: took 25.249299624s to createHost
	I1025 21:36:16.512191  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.514506  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.514816  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.514843  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.514950  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.515106  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.515317  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.515477  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.515675  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:16.515893  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:16.515906  669884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 21:36:16.617893  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729892176.594671389
	
	I1025 21:36:16.617923  669884 fix.go:216] guest clock: 1729892176.594671389
	I1025 21:36:16.617936  669884 fix.go:229] Guest: 2024-10-25 21:36:16.594671389 +0000 UTC Remote: 2024-10-25 21:36:16.512180095 +0000 UTC m=+25.356671505 (delta=82.491294ms)
	I1025 21:36:16.617995  669884 fix.go:200] guest clock delta is within tolerance: 82.491294ms
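	# The 82.491294ms delta above is simply guest minus remote:
	# 1729892176.594671389 - 1729892176.512180095 = 0.082491294s.
	# A rough manual comparison (illustrative; SSH key path and user taken from the
	# ssh client lines earlier in this log):
	date +%s.%N
	ssh -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa \
	    docker@192.168.39.223 'date +%s.%N'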
	I1025 21:36:16.618003  669884 start.go:83] releasing machines lock for "addons-413632", held for 25.355225557s
	I1025 21:36:16.618054  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.618334  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:16.621183  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.621678  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.621707  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.621806  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622303  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622512  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622660  669884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:36:16.622720  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.622736  669884 ssh_runner.go:195] Run: cat /version.json
	I1025 21:36:16.622756  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.625259  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625546  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625624  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.625651  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625818  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.625946  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.625958  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.625983  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.626053  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.626179  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.626193  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.626393  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.626558  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.626726  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.721210  669884 ssh_runner.go:195] Run: systemctl --version
	I1025 21:36:16.727205  669884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:36:16.881797  669884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 21:36:16.888251  669884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 21:36:16.888328  669884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:36:16.903932  669884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 21:36:16.903960  669884 start.go:495] detecting cgroup driver to use...
	I1025 21:36:16.904053  669884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:36:16.920935  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:36:16.935408  669884 docker.go:217] disabling cri-docker service (if available) ...
	I1025 21:36:16.935483  669884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:36:16.949263  669884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:36:16.962845  669884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:36:17.081385  669884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:36:17.230022  669884 docker.go:233] disabling docker service ...
	I1025 21:36:17.230109  669884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:36:17.243663  669884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:36:17.256627  669884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:36:17.373215  669884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:36:17.486145  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 21:36:17.500039  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:36:17.518863  669884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 21:36:17.518928  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.529000  669884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:36:17.529073  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.538844  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.548682  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.558609  669884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:36:17.569456  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.579248  669884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.596099  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
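	# (Illustrative follow-up.) The sed edits above can be verified by dumping the keys
	# they touch from the CRI-O drop-in:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|net.ipv4.ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf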
	I1025 21:36:17.606430  669884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:36:17.615714  669884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 21:36:17.615775  669884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 21:36:17.628384  669884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
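	# (Illustrative verification of the netfilter fallback above; the bridge sysctl only
	# exists once br_netfilter has been loaded.)
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward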
	I1025 21:36:17.637251  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:17.744844  669884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:36:17.843524  669884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:36:17.843651  669884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:36:17.848270  669884 start.go:563] Will wait 60s for crictl version
	I1025 21:36:17.848341  669884 ssh_runner.go:195] Run: which crictl
	I1025 21:36:17.852163  669884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:36:17.891910  669884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 21:36:17.892035  669884 ssh_runner.go:195] Run: crio --version
	I1025 21:36:17.921336  669884 ssh_runner.go:195] Run: crio --version
	I1025 21:36:17.949912  669884 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1025 21:36:17.951263  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:17.953798  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:17.954128  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:17.954149  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:17.954391  669884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 21:36:17.958477  669884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:36:17.970796  669884 kubeadm.go:883] updating cluster {Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 21:36:17.970937  669884 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:36:17.971003  669884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:36:18.002920  669884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1025 21:36:18.003015  669884 ssh_runner.go:195] Run: which lz4
	I1025 21:36:18.007126  669884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 21:36:18.011303  669884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 21:36:18.011340  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1025 21:36:19.275749  669884 crio.go:462] duration metric: took 1.268650384s to copy over tarball
	I1025 21:36:19.275843  669884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 21:36:21.294361  669884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.018480849s)
	I1025 21:36:21.294399  669884 crio.go:469] duration metric: took 2.018613788s to extract the tarball
	I1025 21:36:21.294409  669884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 21:36:21.330953  669884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:36:21.371676  669884 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 21:36:21.371707  669884 cache_images.go:84] Images are preloaded, skipping loading
	I1025 21:36:21.371719  669884 kubeadm.go:934] updating node { 192.168.39.223 8443 v1.31.1 crio true true} ...
	I1025 21:36:21.371887  669884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-413632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 21:36:21.371986  669884 ssh_runner.go:195] Run: crio config
	I1025 21:36:21.419959  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:36:21.419990  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:36:21.420002  669884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 21:36:21.420027  669884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-413632 NodeName:addons-413632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:36:21.420162  669884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-413632"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.223"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 21:36:21.420227  669884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1025 21:36:21.430234  669884 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:36:21.430357  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:36:21.439725  669884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1025 21:36:21.455901  669884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:36:21.472025  669884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
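	# (Optional, illustrative sanity check of the file just written; assumes a kubeadm
	# recent enough to ship "kubeadm config validate", which v1.31.1 is. The test itself
	# goes straight to "kubeadm init" further down.)
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new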
	I1025 21:36:21.488209  669884 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I1025 21:36:21.492109  669884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:36:21.504035  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:21.620483  669884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 21:36:21.637044  669884 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632 for IP: 192.168.39.223
	I1025 21:36:21.637085  669884 certs.go:194] generating shared ca certs ...
	I1025 21:36:21.637104  669884 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:21.637246  669884 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 21:36:22.081934  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt ...
	I1025 21:36:22.081969  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt: {Name:mk10b67a27736d7b414ef7e521efaaacec6f86c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.082139  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key ...
	I1025 21:36:22.082151  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key: {Name:mk1fd55252adf9d9b1a030feaa4972e9322c045b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.082227  669884 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 21:36:22.304318  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt ...
	I1025 21:36:22.304366  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt: {Name:mk56f11ac9b1532ad69157352f1cd54574c645d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.304576  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key ...
	I1025 21:36:22.304591  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key: {Name:mkc333ffd280e59c54a994e4e4c8add83c7ab6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.304695  669884 certs.go:256] generating profile certs ...
	I1025 21:36:22.304774  669884 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key
	I1025 21:36:22.304795  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt with IP's: []
	I1025 21:36:22.376085  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt ...
	I1025 21:36:22.376120  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: {Name:mkc5e4212d9a8dde3be38daf78f02c0285f89735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.376311  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key ...
	I1025 21:36:22.376328  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key: {Name:mk9909e2be6c3c6a3f771f2b423c290c186664aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.376434  669884 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7
	I1025 21:36:22.376460  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.223]
	I1025 21:36:22.504167  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 ...
	I1025 21:36:22.504204  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7: {Name:mk464e0a6b34270037fef5f7a4097ab13384dc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.504400  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7 ...
	I1025 21:36:22.504419  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7: {Name:mkb4f690767180584744b21ee4c51de30043fedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.504528  669884 certs.go:381] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt
	I1025 21:36:22.504626  669884 certs.go:385] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key
	I1025 21:36:22.504698  669884 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key
	I1025 21:36:22.504725  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt with IP's: []
	I1025 21:36:22.899058  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt ...
	I1025 21:36:22.899099  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt: {Name:mk419a80e72150ee18d6bfe94f69c26e1d08c083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.899295  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key ...
	I1025 21:36:22.899313  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key: {Name:mkcf87d4ad053979bc38054885bc6495ae16e62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.899526  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:36:22.899578  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 21:36:22.899695  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:36:22.899805  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 21:36:22.900486  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:36:22.929285  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:36:22.953806  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:36:22.984720  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 21:36:23.019238  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 21:36:23.048916  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 21:36:23.072529  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:36:23.096851  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:36:23.120865  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:36:23.143864  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:36:23.160540  669884 ssh_runner.go:195] Run: openssl version
	I1025 21:36:23.166321  669884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:36:23.178390  669884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.182751  669884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.182818  669884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.188658  669884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
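	# Equivalent manual form of the two openssl/ln steps above: the symlink name is the
	# CA's subject hash (b5213941 here) plus a ".0" suffix, which is how OpenSSL locates
	# trusted CAs under /etc/ssl/certs.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"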
	I1025 21:36:23.199602  669884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 21:36:23.203757  669884 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 21:36:23.203809  669884 kubeadm.go:392] StartCluster: {Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:36:23.203888  669884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:36:23.203928  669884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:36:23.240894  669884 cri.go:89] found id: ""
	I1025 21:36:23.240979  669884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:36:23.251640  669884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:36:23.261709  669884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:36:23.271502  669884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:36:23.271525  669884 kubeadm.go:157] found existing configuration files:
	
	I1025 21:36:23.271581  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 21:36:23.280930  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 21:36:23.281016  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 21:36:23.290723  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 21:36:23.299803  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 21:36:23.299867  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 21:36:23.309001  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 21:36:23.317926  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 21:36:23.317980  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 21:36:23.328613  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 21:36:23.337760  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 21:36:23.337817  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 21:36:23.348337  669884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
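	# (Illustrative only.) While kubeadm init runs, progress can be followed from inside
	# the guest via the kubelet/CRI-O systemd units and the CRI:
	sudo journalctl -u kubelet -u crio --no-pager -n 50
	sudo crictl ps -a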
	I1025 21:36:23.497806  669884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:36:33.317654  669884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1025 21:36:33.317727  669884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 21:36:33.317843  669884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:36:33.317975  669884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:36:33.318115  669884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 21:36:33.318214  669884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:36:33.319838  669884 out.go:235]   - Generating certificates and keys ...
	I1025 21:36:33.319913  669884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 21:36:33.319977  669884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 21:36:33.320059  669884 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:36:33.320127  669884 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:36:33.320195  669884 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:36:33.320276  669884 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1025 21:36:33.320372  669884 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1025 21:36:33.320529  669884 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-413632 localhost] and IPs [192.168.39.223 127.0.0.1 ::1]
	I1025 21:36:33.320616  669884 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1025 21:36:33.320753  669884 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-413632 localhost] and IPs [192.168.39.223 127.0.0.1 ::1]
	I1025 21:36:33.320812  669884 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:36:33.320872  669884 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:36:33.320920  669884 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1025 21:36:33.321032  669884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:36:33.321113  669884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:36:33.321197  669884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 21:36:33.321259  669884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:36:33.321343  669884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:36:33.321412  669884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:36:33.321598  669884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:36:33.321710  669884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:36:33.323400  669884 out.go:235]   - Booting up control plane ...
	I1025 21:36:33.323506  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:36:33.323602  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:36:33.323691  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:36:33.323820  669884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:36:33.323928  669884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:36:33.323981  669884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 21:36:33.324094  669884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 21:36:33.324182  669884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 21:36:33.324233  669884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.012776ms
	I1025 21:36:33.324293  669884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1025 21:36:33.324345  669884 kubeadm.go:310] [api-check] The API server is healthy after 5.5014081s
	I1025 21:36:33.324437  669884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:36:33.324542  669884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:36:33.324592  669884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:36:33.324760  669884 kubeadm.go:310] [mark-control-plane] Marking the node addons-413632 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:36:33.324825  669884 kubeadm.go:310] [bootstrap-token] Using token: nzx9mz.98l3h3sqt096xbnb
	I1025 21:36:33.326342  669884 out.go:235]   - Configuring RBAC rules ...
	I1025 21:36:33.326431  669884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:36:33.326502  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:36:33.326752  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:36:33.327130  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:36:33.327454  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:36:33.327676  669884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:36:33.327963  669884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:36:33.328076  669884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 21:36:33.328292  669884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 21:36:33.328345  669884 kubeadm.go:310] 
	I1025 21:36:33.328530  669884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 21:36:33.328545  669884 kubeadm.go:310] 
	I1025 21:36:33.328856  669884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 21:36:33.328871  669884 kubeadm.go:310] 
	I1025 21:36:33.328919  669884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 21:36:33.329006  669884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:36:33.329053  669884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:36:33.329060  669884 kubeadm.go:310] 
	I1025 21:36:33.329104  669884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 21:36:33.329110  669884 kubeadm.go:310] 
	I1025 21:36:33.329165  669884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:36:33.329176  669884 kubeadm.go:310] 
	I1025 21:36:33.329232  669884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 21:36:33.329297  669884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:36:33.329355  669884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:36:33.329361  669884 kubeadm.go:310] 
	I1025 21:36:33.329437  669884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:36:33.329510  669884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 21:36:33.329516  669884 kubeadm.go:310] 
	I1025 21:36:33.329585  669884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nzx9mz.98l3h3sqt096xbnb \
	I1025 21:36:33.329673  669884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a \
	I1025 21:36:33.329694  669884 kubeadm.go:310] 	--control-plane 
	I1025 21:36:33.329701  669884 kubeadm.go:310] 
	I1025 21:36:33.329769  669884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:36:33.329775  669884 kubeadm.go:310] 
	I1025 21:36:33.329862  669884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nzx9mz.98l3h3sqt096xbnb \
	I1025 21:36:33.330026  669884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a 
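	The join commands printed above are specific to this run: the token and CA cert hash belong to this cluster only. Collapsed onto one line, the worker-node join (run as root on the joining machine) and a way to mint a fresh token once this one expires look like this, a sketch assuming kubeadm is present on that machine:
	
	  kubeadm join control-plane.minikube.internal:8443 --token nzx9mz.98l3h3sqt096xbnb \
	    --discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a
	  # if the token has expired, print a fresh join command from the control plane:
	  kubeadm token create --print-join-command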
	I1025 21:36:33.330040  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:36:33.330047  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:36:33.331637  669884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 21:36:33.332891  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 21:36:33.343974  669884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
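	The 496-byte conflist written here is not reproduced in the log; to see exactly what was staged for the bridge CNI, the file can be read back from the VM (illustrative command, path taken from the scp line above):
	
	  minikube -p addons-413632 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"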
	I1025 21:36:33.365297  669884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:36:33.365418  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-413632 minikube.k8s.io/updated_at=2024_10_25T21_36_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=addons-413632 minikube.k8s.io/primary=true
	I1025 21:36:33.365426  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:33.388517  669884 ops.go:34] apiserver oom_adj: -16
	I1025 21:36:33.517793  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:34.018392  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:34.518537  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:35.017962  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:35.518541  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:36.018446  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:36.518757  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.018130  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.518182  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.617427  669884 kubeadm.go:1113] duration metric: took 4.252098973s to wait for elevateKubeSystemPrivileges
	I1025 21:36:37.617478  669884 kubeadm.go:394] duration metric: took 14.413673011s to StartCluster
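	With StartCluster complete, the API server answers on 192.168.39.223:8443 and the kubeconfig minikube is about to update can be used directly. A quick sanity check against the fresh cluster (illustrative; the context name matches the profile):
	
	  kubectl --context addons-413632 get nodes -o wide
	  kubectl --context addons-413632 -n kube-system get pods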
	I1025 21:36:37.617504  669884 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:37.617669  669884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:36:37.618212  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:37.618454  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:36:37.618491  669884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:36:37.618546  669884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
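	Each key in the toEnable map above is a named minikube addon selected by the start flags; the same selection can be inspected or adjusted after startup with the addons subcommands (illustrative; addon names match the map keys):
	
	  minikube -p addons-413632 addons list
	  minikube -p addons-413632 addons enable metrics-server
	  minikube -p addons-413632 addons disable yakd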
	I1025 21:36:37.618668  669884 addons.go:69] Setting yakd=true in profile "addons-413632"
	I1025 21:36:37.618687  669884 addons.go:69] Setting ingress=true in profile "addons-413632"
	I1025 21:36:37.618698  669884 addons.go:234] Setting addon yakd=true in "addons-413632"
	I1025 21:36:37.618703  669884 addons.go:69] Setting ingress-dns=true in profile "addons-413632"
	I1025 21:36:37.618703  669884 addons.go:69] Setting volcano=true in profile "addons-413632"
	I1025 21:36:37.618724  669884 addons.go:234] Setting addon ingress-dns=true in "addons-413632"
	I1025 21:36:37.618701  669884 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-413632"
	I1025 21:36:37.618736  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618738  669884 addons.go:69] Setting volumesnapshots=true in profile "addons-413632"
	I1025 21:36:37.618742  669884 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-413632"
	I1025 21:36:37.618749  669884 addons.go:234] Setting addon volumesnapshots=true in "addons-413632"
	I1025 21:36:37.618747  669884 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-413632"
	I1025 21:36:37.618755  669884 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-413632"
	I1025 21:36:37.618777  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618782  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618787  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618785  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:37.618842  669884 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-413632"
	I1025 21:36:37.618725  669884 addons.go:234] Setting addon volcano=true in "addons-413632"
	I1025 21:36:37.618877  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618889  669884 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-413632"
	I1025 21:36:37.618917  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619224  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619229  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619239  669884 addons.go:69] Setting storage-provisioner=true in profile "addons-413632"
	I1025 21:36:37.619251  669884 addons.go:234] Setting addon storage-provisioner=true in "addons-413632"
	I1025 21:36:37.618709  669884 addons.go:234] Setting addon ingress=true in "addons-413632"
	I1025 21:36:37.619263  669884 addons.go:69] Setting cloud-spanner=true in profile "addons-413632"
	I1025 21:36:37.619271  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619276  669884 addons.go:69] Setting metrics-server=true in profile "addons-413632"
	I1025 21:36:37.619266  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619287  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619291  669884 addons.go:234] Setting addon metrics-server=true in "addons-413632"
	I1025 21:36:37.619292  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619314  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619324  669884 addons.go:69] Setting default-storageclass=true in profile "addons-413632"
	I1025 21:36:37.619338  669884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-413632"
	I1025 21:36:37.619530  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619561  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619596  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619623  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619651  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619672  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619681  669884 addons.go:69] Setting gcp-auth=true in profile "addons-413632"
	I1025 21:36:37.619686  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619697  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619711  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619252  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619698  669884 mustload.go:65] Loading cluster: addons-413632
	I1025 21:36:37.619776  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619266  669884 addons.go:69] Setting inspektor-gadget=true in profile "addons-413632"
	I1025 21:36:37.619867  669884 addons.go:234] Setting addon inspektor-gadget=true in "addons-413632"
	I1025 21:36:37.619944  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619964  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:37.618684  669884 addons.go:69] Setting registry=true in profile "addons-413632"
	I1025 21:36:37.620100  669884 addons.go:234] Setting addon registry=true in "addons-413632"
	I1025 21:36:37.620128  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620300  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620314  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620329  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620385  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620414  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620473  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619279  669884 addons.go:234] Setting addon cloud-spanner=true in "addons-413632"
	I1025 21:36:37.620508  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620516  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.620477  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620603  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620775  669884 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-413632"
	I1025 21:36:37.620829  669884 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-413632"
	I1025 21:36:37.620869  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.621494  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.621569  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.621656  669884 out.go:177] * Verifying Kubernetes components...
	I1025 21:36:37.619315  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.622009  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.641107  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:37.641337  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.641386  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.641556  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I1025 21:36:37.641624  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I1025 21:36:37.641740  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I1025 21:36:37.641816  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1025 21:36:37.641879  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I1025 21:36:37.641946  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I1025 21:36:37.642090  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642275  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642379  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642566  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.642583  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.642694  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642866  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.643033  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.643033  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.643046  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.643160  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.643173  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.643217  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.644094  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644114  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644207  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.644245  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.644260  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644274  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644331  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644462  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644481  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644569  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644620  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644894  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.645989  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.646025  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.653408  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.653460  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.653932  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.656122  669884 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-413632"
	I1025 21:36:37.656164  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.656506  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.656540  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.658481  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.658821  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.658873  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.659677  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.659716  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.671456  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I1025 21:36:37.672122  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.672778  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.672798  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.673213  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.673415  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.673882  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I1025 21:36:37.674423  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.674971  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.674989  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.675384  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.676428  669884 addons.go:234] Setting addon default-storageclass=true in "addons-413632"
	I1025 21:36:37.676468  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.676838  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.676879  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.677641  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.677683  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.679275  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I1025 21:36:37.679722  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.680232  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.680257  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.680605  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.680750  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.681374  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I1025 21:36:37.681926  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.682476  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.682506  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.682568  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.683041  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.683715  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.683753  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.684592  669884 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 21:36:37.685808  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 21:36:37.685837  669884 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 21:36:37.685859  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.687341  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I1025 21:36:37.687846  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.688337  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.688362  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.688722  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.689290  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.689333  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.689529  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.689561  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.689586  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.689766  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.689946  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.690101  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.690212  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.691106  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45777
	I1025 21:36:37.691467  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.691954  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.691971  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.692310  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.692489  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.694186  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.694555  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.694591  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.695238  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I1025 21:36:37.695695  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.696201  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.696226  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.696586  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.696929  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.702659  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1025 21:36:37.703102  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.703898  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.704050  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I1025 21:36:37.704661  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.704685  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.704760  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.704947  669884 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1025 21:36:37.705162  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.705741  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.705797  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.706184  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 21:36:37.706211  669884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 21:36:37.706235  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.707098  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.707116  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.707524  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.708152  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.708197  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.710211  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.710420  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I1025 21:36:37.710911  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.710933  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.711141  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.711341  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.711511  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.711649  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.711962  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.712060  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1025 21:36:37.712541  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.712685  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I1025 21:36:37.713055  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.713066  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1025 21:36:37.713075  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.713425  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.713444  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.713504  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.713605  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.713866  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.714074  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.714088  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.714227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.714275  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.714511  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.715087  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.715124  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.715343  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.716054  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.716097  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.716367  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.716379  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.716790  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.717046  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.718872  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.721063  669884 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 21:36:37.722494  669884 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 21:36:37.722515  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 21:36:37.722538  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.725679  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I1025 21:36:37.725852  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.726565  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.726583  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.726603  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.726785  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.726988  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.727172  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.727345  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.727796  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.727813  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.728849  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.729095  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.730684  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.731424  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I1025 21:36:37.732015  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.732520  669884 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1025 21:36:37.732618  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.732638  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.733156  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.733831  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.733876  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.734008  669884 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:36:37.734024  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 21:36:37.734043  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.736497  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.736840  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.736870  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.737536  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.737726  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.737897  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.738113  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I1025 21:36:37.738107  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.738727  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.739269  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.739289  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.739714  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.739904  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.741442  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.743137  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1025 21:36:37.744377  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:37.745638  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:37.747102  669884 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:36:37.747127  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 21:36:37.747148  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.747455  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1025 21:36:37.747845  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.748354  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.748371  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.748742  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.748933  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.750844  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.751091  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.751110  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.751342  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I1025 21:36:37.751538  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I1025 21:36:37.751580  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.751697  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.751793  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.751870  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.751952  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.752153  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.752520  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.752536  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.752548  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.752571  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.753626  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I1025 21:36:37.753922  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.754154  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I1025 21:36:37.754418  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.754434  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.754502  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.754955  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.754962  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.754981  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.755147  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.755899  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.755923  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.756322  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.756556  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.756835  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.757439  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.757642  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.758212  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.758219  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.758259  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.758277  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I1025 21:36:37.758689  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.759167  669884 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1025 21:36:37.759243  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.759267  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1025 21:36:37.759294  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I1025 21:36:37.759648  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.760120  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.760171  669884 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1025 21:36:37.760657  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.760674  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.760216  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.760234  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 21:36:37.760243  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.761141  669884 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:36:37.761159  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 21:36:37.761176  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.761317  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.761331  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.761415  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.761694  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.761803  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.762158  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.762163  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.763412  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.763548  669884 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 21:36:37.763656  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:37.763697  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I1025 21:36:37.763668  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:37.764074  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:37.764091  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:37.764099  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:37.764105  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:37.764189  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.764262  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.764586  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:37.764620  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:37.764878  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	W1025 21:36:37.764990  669884 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
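	The warning above is expected with this runtime: the volcano addon is skipped because it does not support crio, and the remaining addons continue to install. If the warning is unwanted, volcano can simply be left disabled for the profile (illustrative):
	
	  minikube -p addons-413632 addons disable volcano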
	I1025 21:36:37.765038  669884 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 21:36:37.765054  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 21:36:37.765068  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.764735  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.765117  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.765146  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 21:36:37.766226  669884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:36:37.767101  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.767351  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.767553  669884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:36:37.767570  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:36:37.767587  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.767660  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 21:36:37.767928  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I1025 21:36:37.768491  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.768936  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769495  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.769515  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769697  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.769770  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.769874  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.769924  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769999  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.770169  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.770472  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.770488  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.770545  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 21:36:37.770687  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.771267  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.771412  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.771492  669884 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1025 21:36:37.771539  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.772002  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.772018  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.771628  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.772199  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.772219  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.772295  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.772500  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.772537  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.772643  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.772970  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.773023  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.773225  669884 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1025 21:36:37.773514  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 21:36:37.773533  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.773925  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 21:36:37.773926  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.774690  669884 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1025 21:36:37.774779  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.775021  669884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:36:37.775034  669884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:36:37.775049  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.775981  669884 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 21:36:37.776000  669884 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1025 21:36:37.776018  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.777148  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.777249  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 21:36:37.777619  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.777644  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.777753  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.777914  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.778058  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.778183  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.778291  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.778806  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.778826  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.778907  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.779069  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.779232  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.779533  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.779793  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 21:36:37.780895  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.781210  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I1025 21:36:37.781399  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.781422  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.781705  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.781716  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.781870  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.782011  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.782150  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.782207  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.782226  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.782312  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 21:36:37.782580  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.782757  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.783557  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 21:36:37.783581  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 21:36:37.783607  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.783992  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.785802  669884 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 21:36:37.786492  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.786899  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.786925  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.787094  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.787366  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.787542  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.787689  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.788310  669884 out.go:177]   - Using image docker.io/busybox:stable
	W1025 21:36:37.788663  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36878->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.788699  669884 retry.go:31] will retry after 231.853642ms: ssh: handshake failed: read tcp 192.168.39.1:36878->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.789515  669884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:36:37.789534  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 21:36:37.789547  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.792083  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.792503  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.792535  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.792724  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.792881  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.793040  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.793173  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	W1025 21:36:37.793809  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36886->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.793836  669884 retry.go:31] will retry after 246.018745ms: ssh: handshake failed: read tcp 192.168.39.1:36886->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.794644  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I1025 21:36:37.794986  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.795468  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.795481  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.795769  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.795974  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.797359  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.799230  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 21:36:37.800580  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 21:36:37.800601  669884 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 21:36:37.800621  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.803681  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.804087  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.804120  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.804200  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.804336  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.804478  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.804584  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	W1025 21:36:37.805158  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36898->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.805184  669884 retry.go:31] will retry after 207.690543ms: ssh: handshake failed: read tcp 192.168.39.1:36898->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:38.046751  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:36:38.067936  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 21:36:38.138164  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 21:36:38.149792  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 21:36:38.149820  669884 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 21:36:38.199326  669884 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 21:36:38.199365  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1025 21:36:38.256505  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:36:38.267795  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 21:36:38.267820  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 21:36:38.273880  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:36:38.292887  669884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 21:36:38.292904  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 21:36:38.296460  669884 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 21:36:38.296487  669884 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 21:36:38.334733  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 21:36:38.334765  669884 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 21:36:38.390784  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:36:38.411777  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:36:38.446406  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 21:36:38.603498  669884 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:36:38.603526  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 21:36:38.627035  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 21:36:38.627062  669884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 21:36:38.641854  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 21:36:38.641895  669884 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 21:36:38.644192  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:36:38.646990  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 21:36:38.647012  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 21:36:38.650357  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 21:36:38.650373  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 21:36:38.799711  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 21:36:38.799737  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 21:36:38.842104  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 21:36:38.842138  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 21:36:38.923930  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 21:36:38.923976  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 21:36:38.929891  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:36:38.929915  669884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 21:36:38.945464  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:36:38.956741  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 21:36:39.178149  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:36:39.198626  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 21:36:39.198682  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 21:36:39.212521  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 21:36:39.212555  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 21:36:39.468762  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 21:36:39.468804  669884 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 21:36:39.562296  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.51549801s)
	I1025 21:36:39.562378  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.562391  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.562736  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.562758  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.562768  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.562768  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.562778  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.563093  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.563097  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.563108  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.601643  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 21:36:39.601685  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 21:36:39.773794  669884 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:36:39.773823  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 21:36:39.917657  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 21:36:39.917696  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 21:36:39.947976  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.879994441s)
	I1025 21:36:39.948047  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.948060  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.948420  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.948476  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.948485  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.948499  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.948507  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.948998  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.949043  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.949049  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.199367  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:36:40.241745  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 21:36:40.241777  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 21:36:40.474480  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 21:36:40.474522  669884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 21:36:40.913343  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.775135867s)
	I1025 21:36:40.913420  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:40.913434  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:40.913782  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:40.913796  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.913803  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:40.913813  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:40.913821  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:40.914063  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:40.914079  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:40.914088  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.931681  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 21:36:40.931705  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 21:36:41.089584  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 21:36:41.089613  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 21:36:41.321570  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:36:41.321604  669884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 21:36:41.605384  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:36:42.070383  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.813820406s)
	I1025 21:36:42.070412  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.796497873s)
	I1025 21:36:42.070451  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070464  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.070511  669884 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.777585152s)
	I1025 21:36:42.070538  669884 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.777598326s)
	I1025 21:36:42.070561  669884 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1025 21:36:42.070462  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070649  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.070843  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:42.070887  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.070895  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.070913  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070921  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.071005  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:42.071013  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071027  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.071056  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.071069  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.071332  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071347  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.071498  669884 node_ready.go:35] waiting up to 6m0s for node "addons-413632" to be "Ready" ...
	I1025 21:36:42.071656  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071671  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.092429  669884 node_ready.go:49] node "addons-413632" has status "Ready":"True"
	I1025 21:36:42.092455  669884 node_ready.go:38] duration metric: took 20.910347ms for node "addons-413632" to be "Ready" ...
	I1025 21:36:42.092467  669884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:36:42.200324  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.200349  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.200824  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.200848  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.240379  669884 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace to be "Ready" ...
	I1025 21:36:42.613955  669884 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-413632" context rescaled to 1 replicas
	I1025 21:36:44.260740  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:44.810491  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 21:36:44.810547  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:44.814221  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:44.814713  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:44.814761  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:44.814932  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:44.815168  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:44.815371  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:44.815534  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:45.448750  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 21:36:45.653482  669884 addons.go:234] Setting addon gcp-auth=true in "addons-413632"
	I1025 21:36:45.653564  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:45.653986  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:45.654038  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:45.669867  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1025 21:36:45.670414  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:45.671092  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:45.671117  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:45.671505  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:45.672009  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:45.672059  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:45.687213  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I1025 21:36:45.687784  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:45.688332  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:45.688362  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:45.688703  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:45.688896  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:45.690497  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:45.690728  669884 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 21:36:45.690754  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:45.693423  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:45.693894  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:45.693920  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:45.694133  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:45.694289  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:45.694453  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:45.694613  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:46.171720  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.759899873s)
	I1025 21:36:46.171792  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.171805  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.171813  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725370464s)
	I1025 21:36:46.172203  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172228  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172305  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.528075813s)
	I1025 21:36:46.172318  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.172339  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172351  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172378  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.172388  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.172396  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172503  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.226991786s)
	I1025 21:36:46.172531  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172545  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172612  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.172639  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.172655  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172662  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172782  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.781781741s)
	I1025 21:36:46.172797  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.215996495s)
	I1025 21:36:46.172815  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172829  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172830  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172842  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173092  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173105  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.173114  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173123  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173181  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.994956709s)
	I1025 21:36:46.173200  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173211  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173231  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173430  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173511  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173518  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.173537  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173543  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173564  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173679  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.974162211s)
	I1025 21:36:46.173694  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173710  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	W1025 21:36:46.173720  669884 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:36:46.173764  669884 retry.go:31] will retry after 304.949065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:36:46.174021  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174034  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174060  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.174129  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.174280  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174289  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174298  669884 addons.go:475] Verifying addon ingress=true in "addons-413632"
	I1025 21:36:46.174303  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174313  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174322  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.174336  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.174639  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174650  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174659  669884 addons.go:475] Verifying addon registry=true in "addons-413632"
	I1025 21:36:46.175142  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.175214  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.175233  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.175240  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.175754  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.175903  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.175926  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.175953  669884 addons.go:475] Verifying addon metrics-server=true in "addons-413632"
	I1025 21:36:46.176553  669884 out.go:177] * Verifying registry addon...
	I1025 21:36:46.177421  669884 out.go:177] * Verifying ingress addon...
	I1025 21:36:46.172402  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.179086  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.179112  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.179119  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.179900  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.179934  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.179949  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.179956  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.179963  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.180297  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.180365  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.180404  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.180878  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 21:36:46.181135  669884 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 21:36:46.181818  669884 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-413632 service yakd-dashboard -n yakd-dashboard
	
	I1025 21:36:46.199682  669884 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 21:36:46.199707  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:46.201641  669884 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:36:46.201659  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:46.245196  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.245223  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.245507  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.245528  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.479519  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:36:46.686397  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:46.687757  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:46.748362  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:47.226092  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:47.226246  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:47.701138  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:47.701169  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:47.994796  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.389333634s)
	I1025 21:36:47.994829  669884 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.304081148s)
	I1025 21:36:47.994872  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:47.994889  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:47.995163  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:47.995218  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:47.995235  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:47.995246  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:47.995703  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:47.995745  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:47.995770  669884 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-413632"
	I1025 21:36:47.997316  669884 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 21:36:47.997433  669884 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 21:36:47.999053  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:47.999903  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 21:36:48.000716  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 21:36:48.000735  669884 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 21:36:48.021039  669884 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:36:48.021066  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:48.153906  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 21:36:48.153937  669884 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 21:36:48.185868  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:48.187785  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:48.218996  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:36:48.219027  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 21:36:48.238091  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.758508808s)
	I1025 21:36:48.238166  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:48.238189  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:48.238464  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:48.238483  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:48.238493  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:48.238501  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:48.238837  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:48.238855  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:48.280049  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:36:48.505452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:48.686320  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:48.686449  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.005035  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:49.193331  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:49.193742  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.266396  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:49.452054  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.17195323s)
	I1025 21:36:49.452112  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:49.452124  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:49.452467  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:49.452491  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:49.452500  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:49.452501  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:49.452508  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:49.452728  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:49.452758  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:49.452769  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:49.453814  669884 addons.go:475] Verifying addon gcp-auth=true in "addons-413632"
	I1025 21:36:49.455518  669884 out.go:177] * Verifying gcp-auth addon...
	I1025 21:36:49.458922  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 21:36:49.527285  669884 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 21:36:49.527309  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:49.553333  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:49.698930  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:49.699541  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.962567  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:50.006150  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:50.187065  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:50.187152  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:50.465552  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:50.565068  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:50.686388  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:50.686444  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:50.962726  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:51.006882  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:51.187075  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:51.187097  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:51.468523  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:51.504915  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:51.687157  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:51.687801  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:51.747064  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:51.963162  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:52.005215  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:52.185114  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:52.186902  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:52.463821  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:52.505607  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:52.685670  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:52.685860  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:52.963246  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:53.004488  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:53.185560  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:53.185737  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:53.464796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:53.505162  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:53.685492  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:53.685624  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:53.963411  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:54.005192  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:54.186114  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:54.186439  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:54.246683  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:54.463183  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:54.504876  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:54.685180  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:54.685201  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:54.962573  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:55.005612  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:55.185644  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:55.185723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:55.463540  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:55.504772  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:55.687650  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:55.687874  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:55.963028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:56.004929  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:56.184722  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:56.185174  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:56.463305  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:56.504224  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:56.686830  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:56.686979  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:56.746795  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:56.963112  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:57.004973  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:57.185698  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:57.186239  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:57.471739  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:57.574143  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:57.685414  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:57.685804  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:57.963229  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:58.004448  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:58.185324  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:58.186017  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:58.464395  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:58.566576  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:58.684988  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:58.685275  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:58.747092  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:58.962027  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:59.005621  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:59.185869  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:59.186567  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:59.463707  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:59.504530  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:59.686848  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:59.687538  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:59.963423  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:00.004285  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:00.185874  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:00.186291  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:00.463518  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:00.565353  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:00.684714  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:00.685540  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:00.748468  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:00.962992  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:01.005353  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:01.185494  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:01.185838  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:01.462788  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:01.505087  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:01.685767  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:01.686059  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:01.962949  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:02.006370  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:02.186077  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:02.186675  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:02.463809  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:02.505942  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:02.685830  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:02.686645  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:02.962164  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:03.005381  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:03.185302  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:03.186445  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:03.246434  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:03.462813  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:03.505751  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:03.686957  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:03.687520  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:03.962039  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.008452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:04.186326  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:04.186375  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:04.463172  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.995859  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.996067  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:04.996109  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:04.996452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.005095  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.184832  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:05.185227  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:05.246717  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:05.463966  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:05.505445  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.685555  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:05.685887  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:05.962542  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:06.004811  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:06.186301  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:06.186469  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:06.463519  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:06.504752  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:06.686089  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:06.686477  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:06.962995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:07.009097  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:07.185456  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:07.185702  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:07.469954  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:07.572805  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:07.685327  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:07.685982  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:07.747947  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:07.962881  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:08.004929  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:08.190593  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:08.190678  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:08.464117  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:08.505237  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:08.685120  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:08.686638  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:08.962894  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:09.005012  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:09.185937  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:09.186508  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:09.463079  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:09.504819  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:09.684709  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:09.685625  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:09.962861  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:10.005280  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:10.185354  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:10.185716  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:10.247110  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:10.463834  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:10.504653  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:10.686008  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:10.686675  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:10.964029  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:11.005495  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:11.189572  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:11.190102  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:11.464541  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:11.504473  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:11.686560  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:11.687994  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:11.962556  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:12.005908  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:12.185564  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:12.185923  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:12.247206  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:12.464806  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:12.504816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:12.686479  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:12.686674  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:12.962832  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:13.004859  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:13.185723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:13.185942  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:13.464043  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:13.504913  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:13.686203  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:13.686447  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:13.963087  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:14.005456  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:14.185482  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:14.185725  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:14.247360  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:14.465142  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:14.505355  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:14.686639  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:14.686920  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:14.963004  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:15.005220  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:15.185061  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:15.186269  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:15.464985  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:15.505642  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:15.687211  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:15.687752  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:15.962994  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:16.006502  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:16.185176  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:16.185529  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:16.465028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:16.505068  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:16.686517  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:16.688226  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:16.748007  669884 pod_ready.go:93] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.748034  669884 pod_ready.go:82] duration metric: took 34.507625324s for pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.748044  669884 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.755175  669884 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9bd5k" not found
	I1025 21:37:16.755203  669884 pod_ready.go:82] duration metric: took 7.152705ms for pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace to be "Ready" ...
	E1025 21:37:16.755214  669884 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9bd5k" not found
	I1025 21:37:16.755221  669884 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.764267  669884 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.764290  669884 pod_ready.go:82] duration metric: took 9.063153ms for pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.764300  669884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.777620  669884 pod_ready.go:93] pod "etcd-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.777648  669884 pod_ready.go:82] duration metric: took 13.338735ms for pod "etcd-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.777661  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.781949  669884 pod_ready.go:93] pod "kube-apiserver-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.781965  669884 pod_ready.go:82] duration metric: took 4.290302ms for pod "kube-apiserver-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.781974  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.945055  669884 pod_ready.go:93] pod "kube-controller-manager-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.945081  669884 pod_ready.go:82] duration metric: took 163.101197ms for pod "kube-controller-manager-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.945095  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jg272" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.963259  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:17.004047  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:17.184620  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:17.184934  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:17.344309  669884 pod_ready.go:93] pod "kube-proxy-jg272" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:17.344334  669884 pod_ready.go:82] duration metric: took 399.232835ms for pod "kube-proxy-jg272" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.344346  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.465557  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:17.504440  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:17.685992  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:17.686117  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:17.743994  669884 pod_ready.go:93] pod "kube-scheduler-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:17.744023  669884 pod_ready.go:82] duration metric: took 399.669334ms for pod "kube-scheduler-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.744038  669884 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.962857  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:18.414049  669884 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:18.414076  669884 pod_ready.go:82] duration metric: took 670.03064ms for pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:18.414085  669884 pod_ready.go:39] duration metric: took 36.321608322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:37:18.414106  669884 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:37:18.414170  669884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:37:18.419042  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:18.419419  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:18.419433  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:18.448353  669884 api_server.go:72] duration metric: took 40.829819368s to wait for apiserver process to appear ...
	I1025 21:37:18.448386  669884 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:37:18.448409  669884 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1025 21:37:18.452931  669884 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I1025 21:37:18.454176  669884 api_server.go:141] control plane version: v1.31.1
	I1025 21:37:18.454210  669884 api_server.go:131] duration metric: took 5.81756ms to wait for apiserver health ...
	I1025 21:37:18.454219  669884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:37:18.462180  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:18.463953  669884 system_pods.go:59] 18 kube-system pods found
	I1025 21:37:18.463985  669884 system_pods.go:61] "amd-gpu-device-plugin-967pw" [cdd329aa-b9f0-4233-b2ab-db63265d7d0c] Running
	I1025 21:37:18.463990  669884 system_pods.go:61] "coredns-7c65d6cfc9-9tqzw" [88e7f6a7-96fd-4c16-b0df-4feb71acbfe4] Running
	I1025 21:37:18.463997  669884 system_pods.go:61] "csi-hostpath-attacher-0" [0a815931-e689-4cde-b86e-48ce8d155a06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:37:18.464006  669884 system_pods.go:61] "csi-hostpath-resizer-0" [b9c13546-2b70-4d29-a94b-c906bb7cab5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:37:18.464016  669884 system_pods.go:61] "csi-hostpathplugin-dp8sx" [eb7167c1-6de0-4a01-b052-10f732186a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:37:18.464022  669884 system_pods.go:61] "etcd-addons-413632" [5a85e992-3ea8-4882-a10d-4b3af5a577de] Running
	I1025 21:37:18.464028  669884 system_pods.go:61] "kube-apiserver-addons-413632" [dfbfa04d-4f8a-439a-bdf8-ce150e0511d6] Running
	I1025 21:37:18.464032  669884 system_pods.go:61] "kube-controller-manager-addons-413632" [f7d9dfe4-d9e4-4bfd-9767-ec5521fe89c9] Running
	I1025 21:37:18.464038  669884 system_pods.go:61] "kube-ingress-dns-minikube" [1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187] Running
	I1025 21:37:18.464042  669884 system_pods.go:61] "kube-proxy-jg272" [d3a14441-9149-4a18-b5d6-06302835d38b] Running
	I1025 21:37:18.464046  669884 system_pods.go:61] "kube-scheduler-addons-413632" [9634b8e0-0e8a-4983-907d-c1bd095f3cc8] Running
	I1025 21:37:18.464057  669884 system_pods.go:61] "metrics-server-84c5f94fbc-7drm7" [9dd37623-d67c-48a2-8e11-18a05cd71be2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:37:18.464066  669884 system_pods.go:61] "nvidia-device-plugin-daemonset-k298m" [b318342e-76c3-477e-8d99-38359ebef6bf] Running
	I1025 21:37:18.464072  669884 system_pods.go:61] "registry-66c9cd494c-xj8xz" [e20b3155-ea05-4981-a773-3c2c98521771] Running
	I1025 21:37:18.464083  669884 system_pods.go:61] "registry-proxy-kpm4c" [211d5f74-7b9d-4d8c-bcdb-bce343e97d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:37:18.464095  669884 system_pods.go:61] "snapshot-controller-56fcc65765-d6wjv" [cfcf8f38-ae62-4726-9acd-d9813a6a11e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.464104  669884 system_pods.go:61] "snapshot-controller-56fcc65765-f8nh5" [73b07a02-551a-4a03-b0f4-a0f1d7dde2b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.464109  669884 system_pods.go:61] "storage-provisioner" [f755426f-779c-44a0-9058-958be3222114] Running
	I1025 21:37:18.464121  669884 system_pods.go:74] duration metric: took 9.895132ms to wait for pod list to return data ...
	I1025 21:37:18.464132  669884 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:37:18.504680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:18.544274  669884 default_sa.go:45] found service account: "default"
	I1025 21:37:18.544307  669884 default_sa.go:55] duration metric: took 80.16714ms for default service account to be created ...
	I1025 21:37:18.544318  669884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:37:18.694764  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:18.694960  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:18.800516  669884 system_pods.go:86] 18 kube-system pods found
	I1025 21:37:18.800558  669884 system_pods.go:89] "amd-gpu-device-plugin-967pw" [cdd329aa-b9f0-4233-b2ab-db63265d7d0c] Running
	I1025 21:37:18.800573  669884 system_pods.go:89] "coredns-7c65d6cfc9-9tqzw" [88e7f6a7-96fd-4c16-b0df-4feb71acbfe4] Running
	I1025 21:37:18.800582  669884 system_pods.go:89] "csi-hostpath-attacher-0" [0a815931-e689-4cde-b86e-48ce8d155a06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:37:18.800649  669884 system_pods.go:89] "csi-hostpath-resizer-0" [b9c13546-2b70-4d29-a94b-c906bb7cab5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:37:18.800674  669884 system_pods.go:89] "csi-hostpathplugin-dp8sx" [eb7167c1-6de0-4a01-b052-10f732186a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:37:18.800682  669884 system_pods.go:89] "etcd-addons-413632" [5a85e992-3ea8-4882-a10d-4b3af5a577de] Running
	I1025 21:37:18.800691  669884 system_pods.go:89] "kube-apiserver-addons-413632" [dfbfa04d-4f8a-439a-bdf8-ce150e0511d6] Running
	I1025 21:37:18.800702  669884 system_pods.go:89] "kube-controller-manager-addons-413632" [f7d9dfe4-d9e4-4bfd-9767-ec5521fe89c9] Running
	I1025 21:37:18.800715  669884 system_pods.go:89] "kube-ingress-dns-minikube" [1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187] Running
	I1025 21:37:18.800722  669884 system_pods.go:89] "kube-proxy-jg272" [d3a14441-9149-4a18-b5d6-06302835d38b] Running
	I1025 21:37:18.800731  669884 system_pods.go:89] "kube-scheduler-addons-413632" [9634b8e0-0e8a-4983-907d-c1bd095f3cc8] Running
	I1025 21:37:18.800740  669884 system_pods.go:89] "metrics-server-84c5f94fbc-7drm7" [9dd37623-d67c-48a2-8e11-18a05cd71be2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:37:18.800750  669884 system_pods.go:89] "nvidia-device-plugin-daemonset-k298m" [b318342e-76c3-477e-8d99-38359ebef6bf] Running
	I1025 21:37:18.800757  669884 system_pods.go:89] "registry-66c9cd494c-xj8xz" [e20b3155-ea05-4981-a773-3c2c98521771] Running
	I1025 21:37:18.800771  669884 system_pods.go:89] "registry-proxy-kpm4c" [211d5f74-7b9d-4d8c-bcdb-bce343e97d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:37:18.800784  669884 system_pods.go:89] "snapshot-controller-56fcc65765-d6wjv" [cfcf8f38-ae62-4726-9acd-d9813a6a11e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.800797  669884 system_pods.go:89] "snapshot-controller-56fcc65765-f8nh5" [73b07a02-551a-4a03-b0f4-a0f1d7dde2b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.800803  669884 system_pods.go:89] "storage-provisioner" [f755426f-779c-44a0-9058-958be3222114] Running
	I1025 21:37:18.800814  669884 system_pods.go:126] duration metric: took 256.487942ms to wait for k8s-apps to be running ...
	I1025 21:37:18.800827  669884 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:37:18.800884  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:37:18.814841  669884 system_svc.go:56] duration metric: took 14.005631ms WaitForService to wait for kubelet
	I1025 21:37:18.814874  669884 kubeadm.go:582] duration metric: took 41.196346797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:37:18.814898  669884 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:37:18.944374  669884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 21:37:18.944403  669884 node_conditions.go:123] node cpu capacity is 2
	I1025 21:37:18.944415  669884 node_conditions.go:105] duration metric: took 129.510826ms to run NodePressure ...
	I1025 21:37:18.944427  669884 start.go:241] waiting for startup goroutines ...
	I1025 21:37:18.961934  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:19.004629  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:19.186640  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:19.187487  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:19.466592  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:19.504272  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:19.685785  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:19.686505  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:19.966530  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:20.004324  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:20.187313  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:20.187337  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:20.462304  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:20.504298  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:20.685505  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:20.685906  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:20.963286  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:21.005022  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:21.186557  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:21.186723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:21.465127  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:21.504699  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:21.685886  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:21.686061  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:21.963016  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:22.006000  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:22.185555  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:22.186072  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:22.466115  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:22.506457  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:22.685879  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:22.687727  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:22.963299  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:23.004995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:23.185551  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:23.185968  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:23.865885  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:23.866062  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:23.866355  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:23.866568  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:23.963413  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:24.004711  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:24.185883  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:24.186723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:24.464383  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:24.504365  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:24.685073  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:24.686060  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:24.962865  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:25.006690  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:25.185033  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:25.185232  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:25.464883  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:25.504646  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:25.685415  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:25.686723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:25.962853  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:26.005234  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:26.185694  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:26.186058  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:26.464716  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:26.504476  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:26.686480  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:26.686949  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:26.964396  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:27.005133  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:27.186028  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:27.186357  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:27.465719  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:27.505392  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:27.685557  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:27.686484  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:27.962519  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:28.004368  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:28.185840  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:28.186254  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:28.465758  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:28.504829  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:28.685304  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:28.686274  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:28.963308  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:29.004680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:29.186130  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:29.186632  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:29.464402  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:29.504338  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:29.685332  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:29.685385  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:29.962680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:30.004399  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:30.187059  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:30.187543  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:30.463464  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:30.504618  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:30.685540  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:30.686371  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:30.962043  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:31.005510  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:31.185085  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:31.185913  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:31.465950  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:31.567422  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:31.686288  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:31.686379  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:31.962919  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:32.005490  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:32.186644  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:32.186718  669884 kapi.go:107] duration metric: took 46.005837011s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 21:37:32.462934  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:32.504566  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:32.685652  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:32.964223  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:33.011820  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:33.185922  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:33.462864  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:33.504869  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:33.686368  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:33.963534  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:34.065483  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:34.186073  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:34.479551  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:34.506200  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:34.685820  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:34.961995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:35.005522  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:35.186432  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:35.462812  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:35.505250  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:35.686097  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:35.963661  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:36.004592  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:36.186012  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:36.465131  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:36.505230  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:36.685859  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:36.964849  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:37.007003  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:37.185720  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:37.462955  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:37.505045  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:37.685537  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.247437  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:38.247880  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.250090  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:38.470490  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:38.507376  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:38.685885  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.962644  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:39.004183  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:39.185641  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:39.462028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:39.504736  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:39.686029  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:39.962633  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:40.008440  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:40.187238  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:40.463437  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:40.565280  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:40.687840  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:40.963174  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:41.005012  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:41.185368  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:41.466695  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:41.505336  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:41.685914  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:41.963343  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:42.008000  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:42.186052  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:42.463010  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:42.506342  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:42.685671  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:42.962967  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:43.004826  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:43.186849  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:43.464028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:43.505585  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:43.945995  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:43.962505  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:44.004776  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:44.187249  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:44.468358  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:44.504184  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:44.685092  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:44.962745  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:45.004905  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:45.185217  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:45.465499  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:45.504175  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:45.685943  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:45.965052  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:46.063729  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:46.186352  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:46.462926  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:46.504816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:46.686674  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:46.963779  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:47.008081  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:47.193998  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:47.464742  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:47.504856  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:47.684972  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:47.962940  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:48.005198  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:48.187097  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:48.466254  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:48.505773  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:48.689742  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:48.963898  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:49.065355  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:49.191993  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:49.465028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:49.505302  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:49.691463  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:49.965147  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:50.005607  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:50.185623  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:50.462684  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:50.504588  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:50.685916  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:50.962599  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:51.004663  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:51.185329  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:51.465723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:51.505015  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:51.691392  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:51.964018  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:52.005194  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:52.185871  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:52.463688  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:52.505058  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:52.685320  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:52.965559  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:53.006048  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:53.186971  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:53.463311  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:53.506823  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:53.686464  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:53.963192  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:54.005912  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:54.185427  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:54.463796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:54.505152  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:54.685666  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:54.963616  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:55.005483  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:55.186176  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:55.462706  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:55.504851  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:55.686635  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:55.962960  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:56.005017  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:56.186488  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:56.464821  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:56.505075  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:56.684892  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:56.962694  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:57.004831  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:57.185719  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:57.464857  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:57.504391  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:57.685960  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:57.963361  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:58.004880  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:58.186839  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:58.813120  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:58.813259  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:58.813429  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:58.963265  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:59.069508  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:59.185794  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:59.468618  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:59.519849  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:59.687100  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:59.962796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:00.004761  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:00.186531  669884 kapi.go:107] duration metric: took 1m14.005390978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 21:38:00.462339  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:00.506493  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:00.962631  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:01.004816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:01.466333  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:01.504204  669884 kapi.go:107] duration metric: took 1m13.504297233s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 21:38:01.963877  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:02.466828  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:02.962581  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:03.463069  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:03.962824  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:04.463167  669884 kapi.go:107] duration metric: took 1m15.004246383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 21:38:04.465316  669884 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-413632 cluster.
	I1025 21:38:04.466837  669884 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 21:38:04.468223  669884 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 21:38:04.469684  669884 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1025 21:38:04.471028  669884 addons.go:510] duration metric: took 1m26.852485407s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner ingress-dns default-storageclass inspektor-gadget metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1025 21:38:04.471073  669884 start.go:246] waiting for cluster config update ...
	I1025 21:38:04.471096  669884 start.go:255] writing updated cluster config ...
	I1025 21:38:04.471380  669884 ssh_runner.go:195] Run: rm -f paused
	I1025 21:38:04.523185  669884 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 21:38:04.524936  669884 out.go:177] * Done! kubectl is now configured to use "addons-413632" cluster and "default" namespace by default
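	
	The gcp-auth hints a few lines above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. As a hedged illustration only (the minikube output names just the key; the value `"true"` and the pod name below are assumptions for the sketch), a minimal pod manifest using that opt-out could look like:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-auth-demo            # hypothetical name, for illustration only
	      labels:
	        gcp-auth-skip-secret: "true"    # label key the gcp-auth webhook checks to skip credential mounting (value assumed)
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox   # same busybox image that appears elsewhere in this report
	        command: ["sleep", "3600"]
	
	Pods created without this label would receive the mounted GCP credentials described above; per the output, already-running pods would need to be recreated or the addon re-enabled with --refresh to pick up the change.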
	
	
	==> CRI-O <==
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.105548909Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892477105520842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ec6daee-3b49-4c1f-980a-03ecbdb351f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.106231237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99d969ae-a5a1-4102-a679-3496e3873631 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.106302527Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99d969ae-a5a1-4102-a679-3496e3873631 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.106737000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11734f5213e971ce9e4489134d109c8d51e3da7d4655fbbdb15e52ae97a59784,PodSandboxId:e4a0878424add9df19df4af3655cbd6738f7b02b93833931a120962c3a5acbc9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729892278943836132,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-8wjnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6cf8d1e-7736-4645-bd6a-66c80211699d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1f6fe97ea36f1e2a110879524c4973699f806b2320bd87507395da1fc958e487,PodSandboxId:f7dbc18556c8b04fed96012d3857eb0efb80da39e61a3efcc5c448d1c331370c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260365232188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b5m9g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee92a599-9841-48e9-982a-d17fc1d13c58,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee1a6d62cd4cf06f7cbfb7314711fd13a251d125f0f49ccf20de06680d8a861,PodSandboxId:c18b4d806f0bbec3bcf793fb957031012c0301ecb03c9fd6dd51b4901e3f6b5e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260214731842,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lj6lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf61e362-4c2b-4774-adce-a0fcaa06c142,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c4f88f793444a2d9439076836c6e65dada3de3ccbbc96b775346b455c6441a,PodSandboxId:5233b1e985ff5ac08a5f9d47f3e935e21a3d987807624d0a56e395509bc99933,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729892227397493026,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851
c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6
101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99d969ae-a5a1-4102-a679-3496e3873631 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.148134478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8569e0b2-d57f-4fa2-9534-b36e9788423b name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.148200563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8569e0b2-d57f-4fa2-9534-b36e9788423b name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.149619852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d39806f-7e3e-4972-99b7-9efa82d7d89d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.151852627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892477151828431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d39806f-7e3e-4972-99b7-9efa82d7d89d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.152609496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=815933cd-8c56-4131-8894-465967543b16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.152662680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=815933cd-8c56-4131-8894-465967543b16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.152970555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11734f5213e971ce9e4489134d109c8d51e3da7d4655fbbdb15e52ae97a59784,PodSandboxId:e4a0878424add9df19df4af3655cbd6738f7b02b93833931a120962c3a5acbc9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729892278943836132,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-8wjnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6cf8d1e-7736-4645-bd6a-66c80211699d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1f6fe97ea36f1e2a110879524c4973699f806b2320bd87507395da1fc958e487,PodSandboxId:f7dbc18556c8b04fed96012d3857eb0efb80da39e61a3efcc5c448d1c331370c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260365232188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b5m9g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee92a599-9841-48e9-982a-d17fc1d13c58,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee1a6d62cd4cf06f7cbfb7314711fd13a251d125f0f49ccf20de06680d8a861,PodSandboxId:c18b4d806f0bbec3bcf793fb957031012c0301ecb03c9fd6dd51b4901e3f6b5e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260214731842,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lj6lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf61e362-4c2b-4774-adce-a0fcaa06c142,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c4f88f793444a2d9439076836c6e65dada3de3ccbbc96b775346b455c6441a,PodSandboxId:5233b1e985ff5ac08a5f9d47f3e935e21a3d987807624d0a56e395509bc99933,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729892227397493026,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851
c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6
101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=815933cd-8c56-4131-8894-465967543b16 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.191337652Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31021513-4be8-4c5c-9e1e-c94b1c0c9d48 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.191408603Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31021513-4be8-4c5c-9e1e-c94b1c0c9d48 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.192440337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6eb13643-40c1-412c-a081-23fdf3d877e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.193751722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892477193723713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6eb13643-40c1-412c-a081-23fdf3d877e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.194274747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69718dfa-da07-40fc-895a-6384a419df73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.194346494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69718dfa-da07-40fc-895a-6384a419df73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.194808904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11734f5213e971ce9e4489134d109c8d51e3da7d4655fbbdb15e52ae97a59784,PodSandboxId:e4a0878424add9df19df4af3655cbd6738f7b02b93833931a120962c3a5acbc9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729892278943836132,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-8wjnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6cf8d1e-7736-4645-bd6a-66c80211699d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1f6fe97ea36f1e2a110879524c4973699f806b2320bd87507395da1fc958e487,PodSandboxId:f7dbc18556c8b04fed96012d3857eb0efb80da39e61a3efcc5c448d1c331370c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260365232188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b5m9g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee92a599-9841-48e9-982a-d17fc1d13c58,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee1a6d62cd4cf06f7cbfb7314711fd13a251d125f0f49ccf20de06680d8a861,PodSandboxId:c18b4d806f0bbec3bcf793fb957031012c0301ecb03c9fd6dd51b4901e3f6b5e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260214731842,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lj6lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf61e362-4c2b-4774-adce-a0fcaa06c142,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c4f88f793444a2d9439076836c6e65dada3de3ccbbc96b775346b455c6441a,PodSandboxId:5233b1e985ff5ac08a5f9d47f3e935e21a3d987807624d0a56e395509bc99933,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729892227397493026,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851
c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6
101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69718dfa-da07-40fc-895a-6384a419df73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.229448897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f809051-a18f-4443-ba46-2f3a62ffd496 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.229525854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f809051-a18f-4443-ba46-2f3a62ffd496 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.230923995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff899201-0fef-41b4-bb23-4ff20e594821 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.232071770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892477232046825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff899201-0fef-41b4-bb23-4ff20e594821 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.232665557Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00d65459-3b49-4d7f-a304-2a68ce4abda7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.232774680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00d65459-3b49-4d7f-a304-2a68ce4abda7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:41:17 addons-413632 crio[661]: time="2024-10-25 21:41:17.233094012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11734f5213e971ce9e4489134d109c8d51e3da7d4655fbbdb15e52ae97a59784,PodSandboxId:e4a0878424add9df19df4af3655cbd6738f7b02b93833931a120962c3a5acbc9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1729892278943836132,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-8wjnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f6cf8d1e-7736-4645-bd6a-66c80211699d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1f6fe97ea36f1e2a110879524c4973699f806b2320bd87507395da1fc958e487,PodSandboxId:f7dbc18556c8b04fed96012d3857eb0efb80da39e61a3efcc5c448d1c331370c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260365232188,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-b5m9g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee92a599-9841-48e9-982a-d17fc1d13c58,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ee1a6d62cd4cf06f7cbfb7314711fd13a251d125f0f49ccf20de06680d8a861,PodSandboxId:c18b4d806f0bbec3bcf793fb957031012c0301ecb03c9fd6dd51b4901e3f6b5e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1729892260214731842,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lj6lc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf61e362-4c2b-4774-adce-a0fcaa06c142,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Imag
e:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27c4f88f793444a2d9439076836c6e65dada3de3ccbbc96b775346b455c6441a,PodSandboxId:5233b1e985ff5ac08a5f9d47f3e935e21a3d987807624d0a56e395509bc99933,Metadata:&ContainerMetadata{Nam
e:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1729892227397493026,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851
c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6
101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.
pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.ter
minationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00d65459-3b49-4d7f-a304-2a68ce4abda7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	977d398946a4a       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   9fa681ce0b8a9       nginx
	ea98b83c409c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   85115ae19e3d0       busybox
	11734f5213e97       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e4a0878424add       ingress-nginx-controller-5f85ff4588-8wjnh
	1f6fe97ea36f1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   f7dbc18556c8b       ingress-nginx-admission-patch-b5m9g
	9ee1a6d62cd4c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   c18b4d806f0bb       ingress-nginx-admission-create-lj6lc
	baec829299267       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        3 minutes ago       Running             metrics-server            0                   bc1b7713e38da       metrics-server-84c5f94fbc-7drm7
	6238e6338b162       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   c8ec7d7eeb361       amd-gpu-device-plugin-967pw
	27c4f88f79344       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   5233b1e985ff5       kube-ingress-dns-minikube
	2337a9243bcac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d15dbd0de78cb       storage-provisioner
	5de2757df2702       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   970d9b785a131       coredns-7c65d6cfc9-9tqzw
	641998da4d5c9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             4 minutes ago       Running             kube-proxy                0                   029a9b9d3282d       kube-proxy-jg272
	db634e56fb345       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             4 minutes ago       Running             kube-scheduler            0                   8f5d32b33d448       kube-scheduler-addons-413632
	7255e811190fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   db17ada635755       etcd-addons-413632
	3fbd91729a37e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             4 minutes ago       Running             kube-controller-manager   0                   6a08fd265dc1e       kube-controller-manager-addons-413632
	ae973967b4de2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             4 minutes ago       Running             kube-apiserver            0                   533122eec8360       kube-apiserver-addons-413632
	
	
	==> coredns [5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a] <==
	[INFO] 10.244.0.8:33377 - 45556 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000097035s
	[INFO] 10.244.0.8:33377 - 26788 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000130875s
	[INFO] 10.244.0.8:33377 - 40748 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000158558s
	[INFO] 10.244.0.8:33377 - 36576 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000133929s
	[INFO] 10.244.0.8:33377 - 64716 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111526s
	[INFO] 10.244.0.8:33377 - 11732 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000125272s
	[INFO] 10.244.0.8:33377 - 19024 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000070418s
	[INFO] 10.244.0.8:47985 - 23803 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096087s
	[INFO] 10.244.0.8:47985 - 23505 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000033617s
	[INFO] 10.244.0.8:51761 - 56075 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100589s
	[INFO] 10.244.0.8:51761 - 55810 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049281s
	[INFO] 10.244.0.8:42462 - 38957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052283s
	[INFO] 10.244.0.8:42462 - 38721 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057939s
	[INFO] 10.244.0.8:46939 - 16877 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055559s
	[INFO] 10.244.0.8:46939 - 16451 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038314s
	[INFO] 10.244.0.23:54352 - 30047 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00057872s
	[INFO] 10.244.0.23:59060 - 38543 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000078637s
	[INFO] 10.244.0.23:37366 - 2193 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011395s
	[INFO] 10.244.0.23:59728 - 58763 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119659s
	[INFO] 10.244.0.23:51732 - 21797 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108226s
	[INFO] 10.244.0.23:37818 - 24452 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118903s
	[INFO] 10.244.0.23:53946 - 37588 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000895837s
	[INFO] 10.244.0.23:50297 - 4685 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00126926s
	[INFO] 10.244.0.27:37833 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000848762s
	[INFO] 10.244.0.27:38467 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00032488s
	
	
	==> describe nodes <==
	Name:               addons-413632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-413632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc
	                    minikube.k8s.io/name=addons-413632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T21_36_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-413632
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Oct 2024 21:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-413632
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Oct 2024 21:41:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Oct 2024 21:39:37 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Oct 2024 21:39:37 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Oct 2024 21:39:37 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 25 Oct 2024 21:39:37 +0000   Fri, 25 Oct 2024 21:36:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    addons-413632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a839ce67ffa94184a398d8242d28429c
	  System UUID:                a839ce67-ffa9-4184-a398-d8242d28429c
	  Boot ID:                    482464d4-1bf2-4223-a91e-3e78b95a75f5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-55bf9c44b4-n7dj7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-8wjnh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 amd-gpu-device-plugin-967pw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7c65d6cfc9-9tqzw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 etcd-addons-413632                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-413632                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-413632        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-jg272                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-413632                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-84c5f94fbc-7drm7              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-413632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-413632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-413632 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m44s  kubelet          Node addons-413632 status is now: NodeReady
	  Normal  RegisteredNode           4m41s  node-controller  Node addons-413632 event: Registered Node addons-413632 in Controller
	
	
	==> dmesg <==
	[  +0.082806] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.334409] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.111192] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.135771] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.335463] kauditd_printk_skb: 151 callbacks suppressed
	[  +8.162051] kauditd_printk_skb: 66 callbacks suppressed
	[Oct25 21:37] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.283563] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.547271] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.723041] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.064607] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.776111] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.164873] kauditd_printk_skb: 2 callbacks suppressed
	[Oct25 21:38] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.256100] kauditd_printk_skb: 4 callbacks suppressed
	[ +20.869470] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.353513] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.940086] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.105746] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.042848] kauditd_printk_skb: 25 callbacks suppressed
	[Oct25 21:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.589151] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.918248] kauditd_printk_skb: 2 callbacks suppressed
	[Oct25 21:40] kauditd_printk_skb: 7 callbacks suppressed
	[Oct25 21:41] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3] <==
	{"level":"info","ts":"2024-10-25T21:37:43.932885Z","caller":"traceutil/trace.go:171","msg":"trace[1390579874] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"259.764467ms","start":"2024-10-25T21:37:43.673112Z","end":"2024-10-25T21:37:43.932876Z","steps":["trace[1390579874] 'agreement among raft nodes before linearized reading'  (duration: 259.668882ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:37:58.794096Z","caller":"traceutil/trace.go:171","msg":"trace[1625734794] linearizableReadLoop","detail":"{readStateIndex:1124; appliedIndex:1123; }","duration":"343.452974ms","start":"2024-10-25T21:37:58.450627Z","end":"2024-10-25T21:37:58.794080Z","steps":["trace[1625734794] 'read index received'  (duration: 343.278344ms)","trace[1625734794] 'applied index is now lower than readState.Index'  (duration: 173.781µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-25T21:37:58.794364Z","caller":"traceutil/trace.go:171","msg":"trace[1207978474] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"439.500973ms","start":"2024-10-25T21:37:58.354854Z","end":"2024-10-25T21:37:58.794355Z","steps":["trace[1207978474] 'process raft request'  (duration: 439.072971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.42489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794501Z","caller":"traceutil/trace.go:171","msg":"trace[788931765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"302.505472ms","start":"2024-10-25T21:37:58.491987Z","end":"2024-10-25T21:37:58.794493Z","steps":["trace[788931765] 'agreement among raft nodes before linearized reading'  (duration: 302.397336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.491956Z","time spent":"302.563571ms","remote":"127.0.0.1:51868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-25T21:37:58.794702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.387422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794743Z","caller":"traceutil/trace.go:171","msg":"trace[867772007] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"121.429317ms","start":"2024-10-25T21:37:58.673307Z","end":"2024-10-25T21:37:58.794737Z","steps":["trace[867772007] 'agreement among raft nodes before linearized reading'  (duration: 121.34698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.284061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794867Z","caller":"traceutil/trace.go:171","msg":"trace[386388254] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1090; }","duration":"265.377125ms","start":"2024-10-25T21:37:58.529483Z","end":"2024-10-25T21:37:58.794860Z","steps":["trace[386388254] 'agreement among raft nodes before linearized reading'  (duration: 265.267993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.354840Z","time spent":"439.571001ms","remote":"127.0.0.1:51846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1089 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-25T21:37:58.794998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.365895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.795254Z","caller":"traceutil/trace.go:171","msg":"trace[1795563616] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"344.622333ms","start":"2024-10-25T21:37:58.450623Z","end":"2024-10-25T21:37:58.795245Z","steps":["trace[1795563616] 'agreement among raft nodes before linearized reading'  (duration: 344.347378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.795303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.450538Z","time spent":"344.756037ms","remote":"127.0.0.1:51868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-25T21:38:08.978112Z","caller":"traceutil/trace.go:171","msg":"trace[1296786523] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"130.212149ms","start":"2024-10-25T21:38:08.847880Z","end":"2024-10-25T21:38:08.978092Z","steps":["trace[1296786523] 'process raft request'  (duration: 130.097075ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:38:34.215899Z","caller":"traceutil/trace.go:171","msg":"trace[567018480] transaction","detail":"{read_only:false; response_revision:1297; number_of_response:1; }","duration":"173.774915ms","start":"2024-10-25T21:38:34.042109Z","end":"2024-10-25T21:38:34.215884Z","steps":["trace[567018480] 'process raft request'  (duration: 173.522387ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:38:37.374066Z","caller":"traceutil/trace.go:171","msg":"trace[2028133464] linearizableReadLoop","detail":"{readStateIndex:1346; appliedIndex:1345; }","duration":"220.868782ms","start":"2024-10-25T21:38:37.153182Z","end":"2024-10-25T21:38:37.374050Z","steps":["trace[2028133464] 'read index received'  (duration: 220.732368ms)","trace[2028133464] 'applied index is now lower than readState.Index'  (duration: 135.997µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-25T21:38:37.374398Z","caller":"traceutil/trace.go:171","msg":"trace[1921315016] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1302; }","duration":"308.973694ms","start":"2024-10-25T21:38:37.065414Z","end":"2024-10-25T21:38:37.374388Z","steps":["trace[1921315016] 'process raft request'  (duration: 308.53717ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.374657Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:38:37.065401Z","time spent":"309.058057ms","remote":"127.0.0.1:52116","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:855 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"warn","ts":"2024-10-25T21:38:37.374885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.716854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-25T21:38:37.374939Z","caller":"traceutil/trace.go:171","msg":"trace[1144617021] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1302; }","duration":"221.769929ms","start":"2024-10-25T21:38:37.153160Z","end":"2024-10-25T21:38:37.374929Z","steps":["trace[1144617021] 'agreement among raft nodes before linearized reading'  (duration: 221.654793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.375158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.440807ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:38:37.375199Z","caller":"traceutil/trace.go:171","msg":"trace[2051312921] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1302; }","duration":"139.483807ms","start":"2024-10-25T21:38:37.235709Z","end":"2024-10-25T21:38:37.375192Z","steps":["trace[2051312921] 'agreement among raft nodes before linearized reading'  (duration: 139.431263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.375646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.378347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:38:37.375742Z","caller":"traceutil/trace.go:171","msg":"trace[657013752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"120.457537ms","start":"2024-10-25T21:38:37.255258Z","end":"2024-10-25T21:38:37.375716Z","steps":["trace[657013752] 'agreement among raft nodes before linearized reading'  (duration: 120.366026ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:41:17 up 5 min,  0 users,  load average: 0.62, 1.00, 0.52
	Linux addons-413632 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b] <==
	 > logger="UnhandledError"
	E1025 21:38:29.245797       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.10:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 21:38:29.280480       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 21:38:29.291769       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1025 21:38:31.176471       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.124.101"}
	I1025 21:38:54.327901       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 21:38:54.507502       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.50.55"}
	I1025 21:38:56.751256       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1025 21:38:57.779658       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1025 21:39:06.208970       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 21:39:40.046774       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 21:40:10.134409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.134493       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.172449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.172512       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.204091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.204154       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.207473       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.207525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.226480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.226534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 21:40:11.209336       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 21:40:11.226861       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 21:40:11.341033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 21:41:16.031017       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.10.223"}
	
	
	==> kube-controller-manager [3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60] <==
	W1025 21:40:20.410864       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:20.410980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:26.740217       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:26.740296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:29.746371       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:29.746539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:30.160388       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:30.160454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:36.352430       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:36.352543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1025 21:40:37.030632       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1025 21:40:37.030684       1 shared_informer.go:320] Caches are synced for resource quota
	I1025 21:40:37.516044       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1025 21:40:37.516261       1 shared_informer.go:320] Caches are synced for garbage collector
	W1025 21:40:44.190016       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:44.190070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:44.420650       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:44.420821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:40:48.611158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:40:48.611206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1025 21:41:15.871408       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="48.931635ms"
	I1025 21:41:15.882444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.994059ms"
	I1025 21:41:15.882504       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.745µs"
	W1025 21:41:16.548451       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:41:16.548491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 21:36:39.590928       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 21:36:39.650217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.223"]
	E1025 21:36:39.650402       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 21:36:39.970750       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1025 21:36:39.970779       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 21:36:39.970801       1 server_linux.go:169] "Using iptables Proxier"
	I1025 21:36:40.039085       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 21:36:40.048206       1 server.go:483] "Version info" version="v1.31.1"
	I1025 21:36:40.048226       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:36:40.056443       1 config.go:199] "Starting service config controller"
	I1025 21:36:40.082764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 21:36:40.082884       1 config.go:105] "Starting endpoint slice config controller"
	I1025 21:36:40.082892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 21:36:40.084062       1 config.go:328] "Starting node config controller"
	I1025 21:36:40.084072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 21:36:40.184065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 21:36:40.184103       1 shared_informer.go:320] Caches are synced for service config
	I1025 21:36:40.184286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57] <==
	E1025 21:36:30.012135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.011930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:36:30.012154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.011989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.012189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1025 21:36:30.010343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.012775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:36:30.012813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.833504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.833596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.899934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:36:30.900868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.923996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.924090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.962092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:36:30.962263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.997750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.997803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.053119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:36:31.053179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.077646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:36:31.077681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.123767       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:36:31.123861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1025 21:36:33.395251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 21:41:13 addons-413632 kubelet[1209]: E1025 21:41:13.091042    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892473089349416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:41:13 addons-413632 kubelet[1209]: E1025 21:41:13.091450    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892473089349416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587563,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855059    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="73b07a02-551a-4a03-b0f4-a0f1d7dde2b5" containerName="volume-snapshot-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855385    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cfcf8f38-ae62-4726-9acd-d9813a6a11e0" containerName="volume-snapshot-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855426    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0a815931-e689-4cde-b86e-48ce8d155a06" containerName="csi-attacher"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855459    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c846fcd-cd87-4881-9639-e90cd9c3c640" containerName="task-pv-container"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855493    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-external-health-monitor-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855524    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="liveness-probe"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855619    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="node-driver-registrar"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855668    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-snapshotter"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855701    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="hostpath"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855736    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-provisioner"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: E1025 21:41:15.855768    1209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9c13546-2b70-4d29-a94b-c906bb7cab5e" containerName="csi-resizer"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.855850    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfcf8f38-ae62-4726-9acd-d9813a6a11e0" containerName="volume-snapshot-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.855882    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="hostpath"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.855912    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-external-health-monitor-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.855945    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c846fcd-cd87-4881-9639-e90cd9c3c640" containerName="task-pv-container"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.855984    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c13546-2b70-4d29-a94b-c906bb7cab5e" containerName="csi-resizer"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856014    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-provisioner"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856043    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="csi-snapshotter"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856081    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="node-driver-registrar"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856111    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a815931-e689-4cde-b86e-48ce8d155a06" containerName="csi-attacher"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856140    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b07a02-551a-4a03-b0f4-a0f1d7dde2b5" containerName="volume-snapshot-controller"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.856170    1209 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7167c1-6de0-4a01-b052-10f732186a02" containerName="liveness-probe"
	Oct 25 21:41:15 addons-413632 kubelet[1209]: I1025 21:41:15.934916    1209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5vn\" (UniqueName: \"kubernetes.io/projected/375477b2-b00d-4105-be11-b2caab094c85-kube-api-access-hz5vn\") pod \"hello-world-app-55bf9c44b4-n7dj7\" (UID: \"375477b2-b00d-4105-be11-b2caab094c85\") " pod="default/hello-world-app-55bf9c44b4-n7dj7"
	
	
	==> storage-provisioner [2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43] <==
	I1025 21:36:46.264394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:36:46.307767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:36:46.307964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:36:46.329987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:36:46.330159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134!
	I1025 21:36:46.333321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cafad7de-61e1-438f-87d5-43ad3584c8ce", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134 became leader
	I1025 21:36:46.434679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-413632 -n addons-413632
helpers_test.go:261: (dbg) Run:  kubectl --context addons-413632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-n7dj7 ingress-nginx-admission-create-lj6lc ingress-nginx-admission-patch-b5m9g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-413632 describe pod hello-world-app-55bf9c44b4-n7dj7 ingress-nginx-admission-create-lj6lc ingress-nginx-admission-patch-b5m9g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-413632 describe pod hello-world-app-55bf9c44b4-n7dj7 ingress-nginx-admission-create-lj6lc ingress-nginx-admission-patch-b5m9g: exit status 1 (67.936094ms)

-- stdout --
	Name:             hello-world-app-55bf9c44b4-n7dj7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-413632/192.168.39.223
	Start Time:       Fri, 25 Oct 2024 21:41:15 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hz5vn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hz5vn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-n7dj7 to addons-413632
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lj6lc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b5m9g" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-413632 describe pod hello-world-app-55bf9c44b4-n7dj7 ingress-nginx-admission-create-lj6lc ingress-nginx-admission-patch-b5m9g: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable ingress-dns --alsologtostderr -v=1: (1.384007099s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable ingress --alsologtostderr -v=1: (7.712675784s)
--- FAIL: TestAddons/parallel/Ingress (153.44s)

TestAddons/parallel/MetricsServer (363.2s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.839534ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7drm7" [9dd37623-d67c-48a2-8e11-18a05cd71be2] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003884808s
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (80.267051ms)

** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-413632, age: 2m4.444182654s

** /stderr **
I1025 21:38:36.446794  669177 retry.go:31] will retry after 4.200967738s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (68.693316ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 2m1.715388798s

** /stderr **
I1025 21:38:40.717637  669177 retry.go:31] will retry after 4.935349185s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (75.787916ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 2m6.727096186s

** /stderr **
I1025 21:38:45.729667  669177 retry.go:31] will retry after 4.805507926s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (70.940184ms)

** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 2m11.603088654s

** /stderr **
I1025 21:38:50.606551  669177 retry.go:31] will retry after 7.742404984s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (68.952515ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 2m19.415834305s

                                                
                                                
** /stderr **
I1025 21:38:58.418281  669177 retry.go:31] will retry after 17.19015358s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (69.67754ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 2m36.676813304s

                                                
                                                
** /stderr **
I1025 21:39:15.679375  669177 retry.go:31] will retry after 28.921760523s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (67.845339ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 3m5.667267308s

                                                
                                                
** /stderr **
I1025 21:39:44.670178  669177 retry.go:31] will retry after 37.677662463s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (66.41006ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 3m43.413979401s

                                                
                                                
** /stderr **
I1025 21:40:22.416593  669177 retry.go:31] will retry after 1m14.619011427s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (68.188242ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 4m58.101949717s

                                                
                                                
** /stderr **
I1025 21:41:37.104420  669177 retry.go:31] will retry after 1m18.161059889s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (66.383388ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 6m16.33392247s

                                                
                                                
** /stderr **
I1025 21:42:55.337329  669177 retry.go:31] will retry after 37.164770035s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (70.277703ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 6m53.571573519s

                                                
                                                
** /stderr **
I1025 21:43:32.574443  669177 retry.go:31] will retry after 58.381462736s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-413632 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-413632 top pods -n kube-system: exit status 1 (70.88878ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-967pw, age: 7m52.024772127s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-413632 -n addons-413632
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 logs -n 25: (1.194679489s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-719988                                                                     | download-only-719988 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| delete  | -p download-only-941359                                                                     | download-only-941359 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-275962 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | binary-mirror-275962                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41967                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-275962                                                                     | binary-mirror-275962 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| addons  | disable dashboard -p                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | addons-413632                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | addons-413632                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-413632 --wait=true                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:38 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | -p addons-413632                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-413632 ip                                                                            | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-413632 ssh cat                                                                       | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | /opt/local-path-provisioner/pvc-635e1fba-296d-4aed-ae47-8b59b1722843_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:39 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:38 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:38 UTC | 25 Oct 24 21:39 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-413632 ssh curl -s                                                                   | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:39 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:39 UTC | 25 Oct 24 21:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:40 UTC | 25 Oct 24 21:40 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-413632 addons                                                                        | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:40 UTC | 25 Oct 24 21:40 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-413632 ip                                                                            | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:41 UTC | 25 Oct 24 21:41 UTC |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:41 UTC | 25 Oct 24 21:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-413632 addons disable                                                                | addons-413632        | jenkins | v1.34.0 | 25 Oct 24 21:41 UTC | 25 Oct 24 21:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 21:35:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:35:51.195159  669884 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:35:51.195418  669884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:51.195428  669884 out.go:358] Setting ErrFile to fd 2...
	I1025 21:35:51.195432  669884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:51.195588  669884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:35:51.196193  669884 out.go:352] Setting JSON to false
	I1025 21:35:51.197134  669884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":15495,"bootTime":1729876656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:35:51.197240  669884 start.go:139] virtualization: kvm guest
	I1025 21:35:51.199531  669884 out.go:177] * [addons-413632] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:35:51.201023  669884 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 21:35:51.201015  669884 notify.go:220] Checking for updates...
	I1025 21:35:51.202508  669884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:51.203791  669884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:35:51.205122  669884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:51.206471  669884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:35:51.207687  669884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:35:51.209343  669884 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:35:51.241077  669884 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 21:35:51.242444  669884 start.go:297] selected driver: kvm2
	I1025 21:35:51.242456  669884 start.go:901] validating driver "kvm2" against <nil>
	I1025 21:35:51.242468  669884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:35:51.243140  669884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:51.243228  669884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:35:51.258156  669884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 21:35:51.258213  669884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 21:35:51.258510  669884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:35:51.258544  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:35:51.258609  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:35:51.258622  669884 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:35:51.258690  669884 start.go:340] cluster config:
	{Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:35:51.258834  669884 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:51.260726  669884 out.go:177] * Starting "addons-413632" primary control-plane node in "addons-413632" cluster
	I1025 21:35:51.261988  669884 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:35:51.262038  669884 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 21:35:51.262052  669884 cache.go:56] Caching tarball of preloaded images
	I1025 21:35:51.262141  669884 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 21:35:51.262157  669884 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 21:35:51.262497  669884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json ...
	I1025 21:35:51.262522  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json: {Name:mkca788804c24b7c5ae7d3793d37c40c7bc3ab83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:35:51.262701  669884 start.go:360] acquireMachinesLock for addons-413632: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 21:35:51.262764  669884 start.go:364] duration metric: took 45.057µs to acquireMachinesLock for "addons-413632"
	I1025 21:35:51.262791  669884 start.go:93] Provisioning new machine with config: &{Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:35:51.262856  669884 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 21:35:51.264370  669884 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 21:35:51.264520  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:35:51.264564  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:35:51.278817  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I1025 21:35:51.279255  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:35:51.279958  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:35:51.279982  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:35:51.280325  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:35:51.280507  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:35:51.280659  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:35:51.280837  669884 start.go:159] libmachine.API.Create for "addons-413632" (driver="kvm2")
	I1025 21:35:51.280867  669884 client.go:168] LocalClient.Create starting
	I1025 21:35:51.280900  669884 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem
	I1025 21:35:51.462035  669884 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem
	I1025 21:35:51.517275  669884 main.go:141] libmachine: Running pre-create checks...
	I1025 21:35:51.517299  669884 main.go:141] libmachine: (addons-413632) Calling .PreCreateCheck
	I1025 21:35:51.517809  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:35:51.518323  669884 main.go:141] libmachine: Creating machine...
	I1025 21:35:51.518342  669884 main.go:141] libmachine: (addons-413632) Calling .Create
	I1025 21:35:51.518493  669884 main.go:141] libmachine: (addons-413632) creating KVM machine...
	I1025 21:35:51.518513  669884 main.go:141] libmachine: (addons-413632) creating network...
	I1025 21:35:51.519781  669884 main.go:141] libmachine: (addons-413632) DBG | found existing default KVM network
	I1025 21:35:51.520575  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.520430  669906 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1025 21:35:51.520606  669884 main.go:141] libmachine: (addons-413632) DBG | created network xml: 
	I1025 21:35:51.520615  669884 main.go:141] libmachine: (addons-413632) DBG | <network>
	I1025 21:35:51.520622  669884 main.go:141] libmachine: (addons-413632) DBG |   <name>mk-addons-413632</name>
	I1025 21:35:51.520627  669884 main.go:141] libmachine: (addons-413632) DBG |   <dns enable='no'/>
	I1025 21:35:51.520632  669884 main.go:141] libmachine: (addons-413632) DBG |   
	I1025 21:35:51.520640  669884 main.go:141] libmachine: (addons-413632) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1025 21:35:51.520647  669884 main.go:141] libmachine: (addons-413632) DBG |     <dhcp>
	I1025 21:35:51.520660  669884 main.go:141] libmachine: (addons-413632) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1025 21:35:51.520675  669884 main.go:141] libmachine: (addons-413632) DBG |     </dhcp>
	I1025 21:35:51.520689  669884 main.go:141] libmachine: (addons-413632) DBG |   </ip>
	I1025 21:35:51.520733  669884 main.go:141] libmachine: (addons-413632) DBG |   
	I1025 21:35:51.520762  669884 main.go:141] libmachine: (addons-413632) DBG | </network>
	I1025 21:35:51.520790  669884 main.go:141] libmachine: (addons-413632) DBG | 
	I1025 21:35:51.525979  669884 main.go:141] libmachine: (addons-413632) DBG | trying to create private KVM network mk-addons-413632 192.168.39.0/24...
	I1025 21:35:51.594310  669884 main.go:141] libmachine: (addons-413632) DBG | private KVM network mk-addons-413632 192.168.39.0/24 created
	I1025 21:35:51.594349  669884 main.go:141] libmachine: (addons-413632) setting up store path in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 ...
	I1025 21:35:51.594362  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.594305  669906 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:51.594381  669884 main.go:141] libmachine: (addons-413632) building disk image from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 21:35:51.594569  669884 main.go:141] libmachine: (addons-413632) Downloading /home/jenkins/minikube-integration/19758-661979/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1025 21:35:51.884182  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:51.884040  669906 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa...
	I1025 21:35:52.005446  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:52.005271  669906 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/addons-413632.rawdisk...
	I1025 21:35:52.005487  669884 main.go:141] libmachine: (addons-413632) DBG | Writing magic tar header
	I1025 21:35:52.005523  669884 main.go:141] libmachine: (addons-413632) DBG | Writing SSH key tar header
	I1025 21:35:52.005533  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 (perms=drwx------)
	I1025 21:35:52.005541  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:52.005387  669906 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632 ...
	I1025 21:35:52.005554  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632
	I1025 21:35:52.005566  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines (perms=drwxr-xr-x)
	I1025 21:35:52.005594  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines
	I1025 21:35:52.005605  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube (perms=drwxr-xr-x)
	I1025 21:35:52.005612  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:52.005620  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979
	I1025 21:35:52.005626  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1025 21:35:52.005631  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home/jenkins
	I1025 21:35:52.005639  669884 main.go:141] libmachine: (addons-413632) DBG | checking permissions on dir: /home
	I1025 21:35:52.005660  669884 main.go:141] libmachine: (addons-413632) DBG | skipping /home - not owner
	I1025 21:35:52.005676  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration/19758-661979 (perms=drwxrwxr-x)
	I1025 21:35:52.005687  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 21:35:52.005696  669884 main.go:141] libmachine: (addons-413632) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 21:35:52.005735  669884 main.go:141] libmachine: (addons-413632) creating domain...
	I1025 21:35:52.006986  669884 main.go:141] libmachine: (addons-413632) define libvirt domain using xml: 
	I1025 21:35:52.007001  669884 main.go:141] libmachine: (addons-413632) <domain type='kvm'>
	I1025 21:35:52.007008  669884 main.go:141] libmachine: (addons-413632)   <name>addons-413632</name>
	I1025 21:35:52.007016  669884 main.go:141] libmachine: (addons-413632)   <memory unit='MiB'>4000</memory>
	I1025 21:35:52.007031  669884 main.go:141] libmachine: (addons-413632)   <vcpu>2</vcpu>
	I1025 21:35:52.007040  669884 main.go:141] libmachine: (addons-413632)   <features>
	I1025 21:35:52.007050  669884 main.go:141] libmachine: (addons-413632)     <acpi/>
	I1025 21:35:52.007056  669884 main.go:141] libmachine: (addons-413632)     <apic/>
	I1025 21:35:52.007064  669884 main.go:141] libmachine: (addons-413632)     <pae/>
	I1025 21:35:52.007074  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007080  669884 main.go:141] libmachine: (addons-413632)   </features>
	I1025 21:35:52.007087  669884 main.go:141] libmachine: (addons-413632)   <cpu mode='host-passthrough'>
	I1025 21:35:52.007093  669884 main.go:141] libmachine: (addons-413632)   
	I1025 21:35:52.007099  669884 main.go:141] libmachine: (addons-413632)   </cpu>
	I1025 21:35:52.007104  669884 main.go:141] libmachine: (addons-413632)   <os>
	I1025 21:35:52.007125  669884 main.go:141] libmachine: (addons-413632)     <type>hvm</type>
	I1025 21:35:52.007133  669884 main.go:141] libmachine: (addons-413632)     <boot dev='cdrom'/>
	I1025 21:35:52.007137  669884 main.go:141] libmachine: (addons-413632)     <boot dev='hd'/>
	I1025 21:35:52.007145  669884 main.go:141] libmachine: (addons-413632)     <bootmenu enable='no'/>
	I1025 21:35:52.007149  669884 main.go:141] libmachine: (addons-413632)   </os>
	I1025 21:35:52.007155  669884 main.go:141] libmachine: (addons-413632)   <devices>
	I1025 21:35:52.007159  669884 main.go:141] libmachine: (addons-413632)     <disk type='file' device='cdrom'>
	I1025 21:35:52.007169  669884 main.go:141] libmachine: (addons-413632)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/boot2docker.iso'/>
	I1025 21:35:52.007179  669884 main.go:141] libmachine: (addons-413632)       <target dev='hdc' bus='scsi'/>
	I1025 21:35:52.007186  669884 main.go:141] libmachine: (addons-413632)       <readonly/>
	I1025 21:35:52.007191  669884 main.go:141] libmachine: (addons-413632)     </disk>
	I1025 21:35:52.007202  669884 main.go:141] libmachine: (addons-413632)     <disk type='file' device='disk'>
	I1025 21:35:52.007213  669884 main.go:141] libmachine: (addons-413632)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1025 21:35:52.007220  669884 main.go:141] libmachine: (addons-413632)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/addons-413632.rawdisk'/>
	I1025 21:35:52.007230  669884 main.go:141] libmachine: (addons-413632)       <target dev='hda' bus='virtio'/>
	I1025 21:35:52.007235  669884 main.go:141] libmachine: (addons-413632)     </disk>
	I1025 21:35:52.007244  669884 main.go:141] libmachine: (addons-413632)     <interface type='network'>
	I1025 21:35:52.007250  669884 main.go:141] libmachine: (addons-413632)       <source network='mk-addons-413632'/>
	I1025 21:35:52.007256  669884 main.go:141] libmachine: (addons-413632)       <model type='virtio'/>
	I1025 21:35:52.007261  669884 main.go:141] libmachine: (addons-413632)     </interface>
	I1025 21:35:52.007266  669884 main.go:141] libmachine: (addons-413632)     <interface type='network'>
	I1025 21:35:52.007271  669884 main.go:141] libmachine: (addons-413632)       <source network='default'/>
	I1025 21:35:52.007277  669884 main.go:141] libmachine: (addons-413632)       <model type='virtio'/>
	I1025 21:35:52.007282  669884 main.go:141] libmachine: (addons-413632)     </interface>
	I1025 21:35:52.007288  669884 main.go:141] libmachine: (addons-413632)     <serial type='pty'>
	I1025 21:35:52.007293  669884 main.go:141] libmachine: (addons-413632)       <target port='0'/>
	I1025 21:35:52.007299  669884 main.go:141] libmachine: (addons-413632)     </serial>
	I1025 21:35:52.007304  669884 main.go:141] libmachine: (addons-413632)     <console type='pty'>
	I1025 21:35:52.007310  669884 main.go:141] libmachine: (addons-413632)       <target type='serial' port='0'/>
	I1025 21:35:52.007315  669884 main.go:141] libmachine: (addons-413632)     </console>
	I1025 21:35:52.007323  669884 main.go:141] libmachine: (addons-413632)     <rng model='virtio'>
	I1025 21:35:52.007354  669884 main.go:141] libmachine: (addons-413632)       <backend model='random'>/dev/random</backend>
	I1025 21:35:52.007378  669884 main.go:141] libmachine: (addons-413632)     </rng>
	I1025 21:35:52.007392  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007401  669884 main.go:141] libmachine: (addons-413632)     
	I1025 21:35:52.007409  669884 main.go:141] libmachine: (addons-413632)   </devices>
	I1025 21:35:52.007416  669884 main.go:141] libmachine: (addons-413632) </domain>
	I1025 21:35:52.007428  669884 main.go:141] libmachine: (addons-413632) 
	I1025 21:35:52.011980  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:61:2d:1d in network default
	I1025 21:35:52.012549  669884 main.go:141] libmachine: (addons-413632) starting domain...
	I1025 21:35:52.012567  669884 main.go:141] libmachine: (addons-413632) ensuring networks are active...
	I1025 21:35:52.012578  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:52.013279  669884 main.go:141] libmachine: (addons-413632) Ensuring network default is active
	I1025 21:35:52.013597  669884 main.go:141] libmachine: (addons-413632) Ensuring network mk-addons-413632 is active
	I1025 21:35:52.014118  669884 main.go:141] libmachine: (addons-413632) getting domain XML...
	I1025 21:35:52.014892  669884 main.go:141] libmachine: (addons-413632) creating domain...
	I1025 21:35:53.200526  669884 main.go:141] libmachine: (addons-413632) waiting for IP...
	I1025 21:35:53.201456  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.201806  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.201886  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.201823  669906 retry.go:31] will retry after 247.899943ms: waiting for domain to come up
	I1025 21:35:53.451491  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.451996  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.452040  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.451964  669906 retry.go:31] will retry after 319.364472ms: waiting for domain to come up
	I1025 21:35:53.772482  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:53.772945  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:53.772985  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:53.772904  669906 retry.go:31] will retry after 331.396051ms: waiting for domain to come up
	I1025 21:35:54.105649  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:54.106095  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:54.106136  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:54.106070  669906 retry.go:31] will retry after 553.832242ms: waiting for domain to come up
	I1025 21:35:54.661791  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:54.662234  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:54.662289  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:54.662209  669906 retry.go:31] will retry after 552.909314ms: waiting for domain to come up
	I1025 21:35:55.217847  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:55.218251  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:55.218304  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:55.218238  669906 retry.go:31] will retry after 751.938155ms: waiting for domain to come up
	I1025 21:35:55.972115  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:55.972523  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:55.972561  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:55.972484  669906 retry.go:31] will retry after 1.136661726s: waiting for domain to come up
	I1025 21:35:57.110430  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:57.110930  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:57.110958  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:57.110875  669906 retry.go:31] will retry after 1.015893365s: waiting for domain to come up
	I1025 21:35:58.128288  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:58.128677  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:58.128718  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:58.128640  669906 retry.go:31] will retry after 1.174270445s: waiting for domain to come up
	I1025 21:35:59.304992  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:35:59.305371  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:35:59.305398  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:35:59.305337  669906 retry.go:31] will retry after 2.011576373s: waiting for domain to come up
	I1025 21:36:01.318687  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:01.319085  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:01.319114  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:01.319070  669906 retry.go:31] will retry after 2.767085669s: waiting for domain to come up
	I1025 21:36:04.089930  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:04.090383  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:04.090412  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:04.090322  669906 retry.go:31] will retry after 2.389221118s: waiting for domain to come up
	I1025 21:36:06.481504  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:06.482050  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:06.482078  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:06.481996  669906 retry.go:31] will retry after 4.019884751s: waiting for domain to come up
	I1025 21:36:10.506341  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:10.506867  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find current IP address of domain addons-413632 in network mk-addons-413632
	I1025 21:36:10.506909  669884 main.go:141] libmachine: (addons-413632) DBG | I1025 21:36:10.506847  669906 retry.go:31] will retry after 4.731359986s: waiting for domain to come up
	I1025 21:36:15.242714  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.243078  669884 main.go:141] libmachine: (addons-413632) found domain IP: 192.168.39.223
	I1025 21:36:15.243102  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has current primary IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.243108  669884 main.go:141] libmachine: (addons-413632) reserving static IP address...
	I1025 21:36:15.243524  669884 main.go:141] libmachine: (addons-413632) DBG | unable to find host DHCP lease matching {name: "addons-413632", mac: "52:54:00:7e:f7:68", ip: "192.168.39.223"} in network mk-addons-413632
	I1025 21:36:15.320441  669884 main.go:141] libmachine: (addons-413632) DBG | Getting to WaitForSSH function...
	I1025 21:36:15.320480  669884 main.go:141] libmachine: (addons-413632) reserved static IP address 192.168.39.223 for domain addons-413632
	I1025 21:36:15.320494  669884 main.go:141] libmachine: (addons-413632) waiting for SSH...
	I1025 21:36:15.323692  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.324228  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.324258  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.324407  669884 main.go:141] libmachine: (addons-413632) DBG | Using SSH client type: external
	I1025 21:36:15.324438  669884 main.go:141] libmachine: (addons-413632) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa (-rw-------)
	I1025 21:36:15.324483  669884 main.go:141] libmachine: (addons-413632) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 21:36:15.324501  669884 main.go:141] libmachine: (addons-413632) DBG | About to run SSH command:
	I1025 21:36:15.324513  669884 main.go:141] libmachine: (addons-413632) DBG | exit 0
	I1025 21:36:15.449310  669884 main.go:141] libmachine: (addons-413632) DBG | SSH cmd err, output: <nil>: 
	I1025 21:36:15.449563  669884 main.go:141] libmachine: (addons-413632) KVM machine creation complete
	I1025 21:36:15.449854  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:36:15.450539  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:15.450724  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:15.450883  669884 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1025 21:36:15.450899  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:15.452241  669884 main.go:141] libmachine: Detecting operating system of created instance...
	I1025 21:36:15.452257  669884 main.go:141] libmachine: Waiting for SSH to be available...
	I1025 21:36:15.452263  669884 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 21:36:15.452272  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.454485  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.454849  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.454882  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.455002  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.455190  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.455480  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.455652  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.455843  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.456058  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.456072  669884 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 21:36:15.560496  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:36:15.560551  669884 main.go:141] libmachine: Detecting the provisioner...
	I1025 21:36:15.560565  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.563666  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.564106  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.564131  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.564257  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.564510  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.564682  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.564833  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.565024  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.565210  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.565221  669884 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1025 21:36:15.669809  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1025 21:36:15.669909  669884 main.go:141] libmachine: found compatible host: buildroot
	I1025 21:36:15.669919  669884 main.go:141] libmachine: Provisioning with buildroot...
	I1025 21:36:15.669927  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.670214  669884 buildroot.go:166] provisioning hostname "addons-413632"
	I1025 21:36:15.670246  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.670499  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.673011  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.673378  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.673404  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.673574  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.673785  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.673942  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.674077  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.674222  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.674437  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.674453  669884 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-413632 && echo "addons-413632" | sudo tee /etc/hostname
	I1025 21:36:15.790900  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-413632
	
	I1025 21:36:15.790934  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.793816  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.794142  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.794165  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.794322  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:15.794520  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.794675  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:15.794869  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:15.795108  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:15.795307  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:15.795325  669884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-413632' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-413632/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-413632' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:36:15.906205  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:36:15.906253  669884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 21:36:15.906287  669884 buildroot.go:174] setting up certificates
	I1025 21:36:15.906302  669884 provision.go:84] configureAuth start
	I1025 21:36:15.906319  669884 main.go:141] libmachine: (addons-413632) Calling .GetMachineName
	I1025 21:36:15.906637  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:15.909098  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.909455  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.909480  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.909632  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:15.911884  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.912228  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:15.912260  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:15.912356  669884 provision.go:143] copyHostCerts
	I1025 21:36:15.912470  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 21:36:15.912622  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 21:36:15.912716  669884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 21:36:15.912795  669884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.addons-413632 san=[127.0.0.1 192.168.39.223 addons-413632 localhost minikube]
	I1025 21:36:16.033557  669884 provision.go:177] copyRemoteCerts
	I1025 21:36:16.033651  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:36:16.033692  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.036314  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.036678  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.036708  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.036875  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.037083  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.037256  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.037397  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.120241  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 21:36:16.145205  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 21:36:16.169800  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 21:36:16.194364  669884 provision.go:87] duration metric: took 288.042002ms to configureAuth
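	(A minimal sketch, not part of the minikube output: the three scp lines above push the CA and the freshly generated server cert/key into /etc/docker on the guest. The SANs requested in the "generating server cert" line can be confirmed on the host with a standard openssl call against the ServerCertPath shown earlier.)
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list the entries from san=[127.0.0.1 192.168.39.223 addons-413632 localhost minikube]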
	I1025 21:36:16.194399  669884 buildroot.go:189] setting minikube options for container-runtime
	I1025 21:36:16.194623  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:16.194735  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.197803  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.198372  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.198405  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.198551  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.198734  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.198893  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.199025  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.199189  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:16.199416  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:16.199438  669884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 21:36:16.413395  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
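	(Editor's sketch, assuming the profile name above: the tee command leaves a one-line drop-in, presumably consumed by an EnvironmentFile= reference in the crio service unit. It can be re-read from the host with minikube's ssh subcommand.)
	out/minikube-linux-amd64 -p addons-413632 ssh "cat /etc/sysconfig/crio.minikube"
	# expected content, per the command above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '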
	
	I1025 21:36:16.413429  669884 main.go:141] libmachine: Checking connection to Docker...
	I1025 21:36:16.413441  669884 main.go:141] libmachine: (addons-413632) Calling .GetURL
	I1025 21:36:16.414935  669884 main.go:141] libmachine: (addons-413632) DBG | using libvirt version 6000000
	I1025 21:36:16.417165  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.417587  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.417621  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.417792  669884 main.go:141] libmachine: Docker is up and running!
	I1025 21:36:16.417809  669884 main.go:141] libmachine: Reticulating splines...
	I1025 21:36:16.417819  669884 client.go:171] duration metric: took 25.136942248s to LocalClient.Create
	I1025 21:36:16.417850  669884 start.go:167] duration metric: took 25.13703198s to libmachine.API.Create "addons-413632"
	I1025 21:36:16.417861  669884 start.go:293] postStartSetup for "addons-413632" (driver="kvm2")
	I1025 21:36:16.417873  669884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:36:16.417898  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.418102  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:36:16.418128  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.420283  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.420601  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.420622  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.420767  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.420928  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.421126  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.421250  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.503240  669884 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:36:16.507856  669884 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 21:36:16.507890  669884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 21:36:16.507987  669884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 21:36:16.508017  669884 start.go:296] duration metric: took 90.147535ms for postStartSetup
	I1025 21:36:16.508063  669884 main.go:141] libmachine: (addons-413632) Calling .GetConfigRaw
	I1025 21:36:16.508719  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:16.511339  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.511665  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.511689  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.511990  669884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/config.json ...
	I1025 21:36:16.512167  669884 start.go:128] duration metric: took 25.249299624s to createHost
	I1025 21:36:16.512191  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.514506  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.514816  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.514843  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.514950  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.515106  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.515317  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.515477  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.515675  669884 main.go:141] libmachine: Using SSH client type: native
	I1025 21:36:16.515893  669884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.223 22 <nil> <nil>}
	I1025 21:36:16.515906  669884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 21:36:16.617893  669884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729892176.594671389
	
	I1025 21:36:16.617923  669884 fix.go:216] guest clock: 1729892176.594671389
	I1025 21:36:16.617936  669884 fix.go:229] Guest: 2024-10-25 21:36:16.594671389 +0000 UTC Remote: 2024-10-25 21:36:16.512180095 +0000 UTC m=+25.356671505 (delta=82.491294ms)
	I1025 21:36:16.617995  669884 fix.go:200] guest clock delta is within tolerance: 82.491294ms
	I1025 21:36:16.618003  669884 start.go:83] releasing machines lock for "addons-413632", held for 25.355225557s
	I1025 21:36:16.618054  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.618334  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:16.621183  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.621678  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.621707  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.621806  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622303  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622512  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:16.622660  669884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:36:16.622720  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.622736  669884 ssh_runner.go:195] Run: cat /version.json
	I1025 21:36:16.622756  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:16.625259  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625546  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625624  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.625651  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.625818  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.625946  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.625958  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:16.625983  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:16.626053  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.626179  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:16.626193  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.626393  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:16.626558  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:16.626726  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:16.721210  669884 ssh_runner.go:195] Run: systemctl --version
	I1025 21:36:16.727205  669884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 21:36:16.881797  669884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 21:36:16.888251  669884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 21:36:16.888328  669884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 21:36:16.903932  669884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 21:36:16.903960  669884 start.go:495] detecting cgroup driver to use...
	I1025 21:36:16.904053  669884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:36:16.920935  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:36:16.935408  669884 docker.go:217] disabling cri-docker service (if available) ...
	I1025 21:36:16.935483  669884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 21:36:16.949263  669884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 21:36:16.962845  669884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 21:36:17.081385  669884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 21:36:17.230022  669884 docker.go:233] disabling docker service ...
	I1025 21:36:17.230109  669884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 21:36:17.243663  669884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 21:36:17.256627  669884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 21:36:17.373215  669884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 21:36:17.486145  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
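	(A minimal sketch, not part of the minikube output: the stop/disable/mask sequence above keeps the Docker engine and cri-dockerd from competing with CRI-O for the CRI socket. The end state can be confirmed from inside the guest.)
	systemctl is-enabled docker.service docker.socket cri-docker.service cri-docker.socket
	# masked units report "masked", disabled sockets report "disabled",
	# and the is-active probe above exits non-zero once nothing is running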
	I1025 21:36:17.500039  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:36:17.518863  669884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 21:36:17.518928  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.529000  669884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 21:36:17.529073  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.538844  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.548682  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.558609  669884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:36:17.569456  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.579248  669884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 21:36:17.596099  669884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
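	(Hedged reconstruction, since the full file is never printed in this log: taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; the section headers are the standard CRI-O ones and may differ in the shipped image.)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]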
	I1025 21:36:17.606430  669884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:36:17.615714  669884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 21:36:17.615775  669884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 21:36:17.628384  669884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
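	(Quick sanity check, a sketch rather than minikube output: after modprobe br_netfilter and the ip_forward write above, the knob that the earlier sysctl probe could not stat should now resolve.)
	sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	# net.ipv4.ip_forward is 1 after the echo above; the bridge value depends on kernel defaults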
	I1025 21:36:17.637251  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:17.744844  669884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 21:36:17.843524  669884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 21:36:17.843651  669884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 21:36:17.848270  669884 start.go:563] Will wait 60s for crictl version
	I1025 21:36:17.848341  669884 ssh_runner.go:195] Run: which crictl
	I1025 21:36:17.852163  669884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 21:36:17.891910  669884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 21:36:17.892035  669884 ssh_runner.go:195] Run: crio --version
	I1025 21:36:17.921336  669884 ssh_runner.go:195] Run: crio --version
	I1025 21:36:17.949912  669884 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1025 21:36:17.951263  669884 main.go:141] libmachine: (addons-413632) Calling .GetIP
	I1025 21:36:17.953798  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:17.954128  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:17.954149  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:17.954391  669884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 21:36:17.958477  669884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:36:17.970796  669884 kubeadm.go:883] updating cluster {Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 21:36:17.970937  669884 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:36:17.971003  669884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:36:18.002920  669884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1025 21:36:18.003015  669884 ssh_runner.go:195] Run: which lz4
	I1025 21:36:18.007126  669884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 21:36:18.011303  669884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 21:36:18.011340  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1025 21:36:19.275749  669884 crio.go:462] duration metric: took 1.268650384s to copy over tarball
	I1025 21:36:19.275843  669884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 21:36:21.294361  669884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.018480849s)
	I1025 21:36:21.294399  669884 crio.go:469] duration metric: took 2.018613788s to extract the tarball
	I1025 21:36:21.294409  669884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 21:36:21.330953  669884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 21:36:21.371676  669884 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 21:36:21.371707  669884 cache_images.go:84] Images are preloaded, skipping loading
	I1025 21:36:21.371719  669884 kubeadm.go:934] updating node { 192.168.39.223 8443 v1.31.1 crio true true} ...
	I1025 21:36:21.371887  669884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-413632 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
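	(A minimal sketch, not part of the minikube output: the [Service]/ExecStart block above is what gets written a few lines below as the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The merged unit can be viewed on the node.)
	systemctl cat kubelet
	# shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart override above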
	I1025 21:36:21.371986  669884 ssh_runner.go:195] Run: crio config
	I1025 21:36:21.419959  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:36:21.419990  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:36:21.420002  669884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 21:36:21.420027  669884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.223 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-413632 NodeName:addons-413632 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 21:36:21.420162  669884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-413632"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.223"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.223"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
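	(Hedged aside: this generated config is written out below as /var/tmp/minikube/kubeadm.yaml.new, copied to /var/tmp/minikube/kubeadm.yaml, and finally handed to kubeadm init near the end of this log. Assuming the v1.31.1 kubeadm binary is on PATH, it can be sanity-checked offline.)
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# and compared against upstream defaults:
	kubeadm config print init-defaults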
	
	I1025 21:36:21.420227  669884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1025 21:36:21.430234  669884 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 21:36:21.430357  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:36:21.439725  669884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1025 21:36:21.455901  669884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:36:21.472025  669884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1025 21:36:21.488209  669884 ssh_runner.go:195] Run: grep 192.168.39.223	control-plane.minikube.internal$ /etc/hosts
	I1025 21:36:21.492109  669884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
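	(Hedged reconstruction, not printed by minikube: the net effect of this rewrite plus the earlier host.minikube.internal rewrite is that the guest's /etc/hosts resolves both minikube-internal names locally.)
	192.168.39.1	host.minikube.internal
	192.168.39.223	control-plane.minikube.internal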
	I1025 21:36:21.504035  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:21.620483  669884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 21:36:21.637044  669884 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632 for IP: 192.168.39.223
	I1025 21:36:21.637085  669884 certs.go:194] generating shared ca certs ...
	I1025 21:36:21.637104  669884 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:21.637246  669884 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 21:36:22.081934  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt ...
	I1025 21:36:22.081969  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt: {Name:mk10b67a27736d7b414ef7e521efaaacec6f86c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.082139  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key ...
	I1025 21:36:22.082151  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key: {Name:mk1fd55252adf9d9b1a030feaa4972e9322c045b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.082227  669884 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 21:36:22.304318  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt ...
	I1025 21:36:22.304366  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt: {Name:mk56f11ac9b1532ad69157352f1cd54574c645d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.304576  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key ...
	I1025 21:36:22.304591  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key: {Name:mkc333ffd280e59c54a994e4e4c8add83c7ab6ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.304695  669884 certs.go:256] generating profile certs ...
	I1025 21:36:22.304774  669884 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key
	I1025 21:36:22.304795  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt with IP's: []
	I1025 21:36:22.376085  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt ...
	I1025 21:36:22.376120  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: {Name:mkc5e4212d9a8dde3be38daf78f02c0285f89735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.376311  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key ...
	I1025 21:36:22.376328  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.key: {Name:mk9909e2be6c3c6a3f771f2b423c290c186664aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.376434  669884 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7
	I1025 21:36:22.376460  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.223]
	I1025 21:36:22.504167  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 ...
	I1025 21:36:22.504204  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7: {Name:mk464e0a6b34270037fef5f7a4097ab13384dc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.504400  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7 ...
	I1025 21:36:22.504419  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7: {Name:mkb4f690767180584744b21ee4c51de30043fedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.504528  669884 certs.go:381] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt.1ca821e7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt
	I1025 21:36:22.504626  669884 certs.go:385] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key.1ca821e7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key
	I1025 21:36:22.504698  669884 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key
	I1025 21:36:22.504725  669884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt with IP's: []
	I1025 21:36:22.899058  669884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt ...
	I1025 21:36:22.899099  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt: {Name:mk419a80e72150ee18d6bfe94f69c26e1d08c083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.899295  669884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key ...
	I1025 21:36:22.899313  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key: {Name:mkcf87d4ad053979bc38054885bc6495ae16e62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:22.899526  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:36:22.899578  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 21:36:22.899695  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:36:22.899805  669884 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 21:36:22.900486  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:36:22.929285  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 21:36:22.953806  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:36:22.984720  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 21:36:23.019238  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 21:36:23.048916  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 21:36:23.072529  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:36:23.096851  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:36:23.120865  669884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:36:23.143864  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 21:36:23.160540  669884 ssh_runner.go:195] Run: openssl version
	I1025 21:36:23.166321  669884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:36:23.178390  669884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.182751  669884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.182818  669884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:36:23.188658  669884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
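	(A minimal sketch of why the link is named b5213941.0: OpenSSL's hashed certificate directory lookup expects the file name to be the CA's subject hash, which is exactly what the openssl x509 -hash call two lines earlier produced. The equivalent manual steps, using the paths from this log:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0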
	I1025 21:36:23.199602  669884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 21:36:23.203757  669884 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 21:36:23.203809  669884 kubeadm.go:392] StartCluster: {Name:addons-413632 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-413632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:36:23.203888  669884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 21:36:23.203928  669884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 21:36:23.240894  669884 cri.go:89] found id: ""
	I1025 21:36:23.240979  669884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:36:23.251640  669884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:36:23.261709  669884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:36:23.271502  669884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:36:23.271525  669884 kubeadm.go:157] found existing configuration files:
	
	I1025 21:36:23.271581  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 21:36:23.280930  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 21:36:23.281016  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 21:36:23.290723  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 21:36:23.299803  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 21:36:23.299867  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 21:36:23.309001  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 21:36:23.317926  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 21:36:23.317980  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 21:36:23.328613  669884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 21:36:23.337760  669884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 21:36:23.337817  669884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 21:36:23.348337  669884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 21:36:23.497806  669884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 21:36:33.317654  669884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1025 21:36:33.317727  669884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 21:36:33.317843  669884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 21:36:33.317975  669884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 21:36:33.318115  669884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 21:36:33.318214  669884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 21:36:33.319838  669884 out.go:235]   - Generating certificates and keys ...
	I1025 21:36:33.319913  669884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 21:36:33.319977  669884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 21:36:33.320059  669884 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 21:36:33.320127  669884 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1025 21:36:33.320195  669884 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1025 21:36:33.320276  669884 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1025 21:36:33.320372  669884 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1025 21:36:33.320529  669884 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-413632 localhost] and IPs [192.168.39.223 127.0.0.1 ::1]
	I1025 21:36:33.320616  669884 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1025 21:36:33.320753  669884 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-413632 localhost] and IPs [192.168.39.223 127.0.0.1 ::1]
	I1025 21:36:33.320812  669884 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 21:36:33.320872  669884 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 21:36:33.320920  669884 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1025 21:36:33.321032  669884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 21:36:33.321113  669884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 21:36:33.321197  669884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 21:36:33.321259  669884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 21:36:33.321343  669884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 21:36:33.321412  669884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 21:36:33.321598  669884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 21:36:33.321710  669884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 21:36:33.323400  669884 out.go:235]   - Booting up control plane ...
	I1025 21:36:33.323506  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 21:36:33.323602  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 21:36:33.323691  669884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 21:36:33.323820  669884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 21:36:33.323928  669884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 21:36:33.323981  669884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 21:36:33.324094  669884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 21:36:33.324182  669884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 21:36:33.324233  669884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.012776ms
	I1025 21:36:33.324293  669884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1025 21:36:33.324345  669884 kubeadm.go:310] [api-check] The API server is healthy after 5.5014081s
	I1025 21:36:33.324437  669884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 21:36:33.324542  669884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 21:36:33.324592  669884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 21:36:33.324760  669884 kubeadm.go:310] [mark-control-plane] Marking the node addons-413632 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 21:36:33.324825  669884 kubeadm.go:310] [bootstrap-token] Using token: nzx9mz.98l3h3sqt096xbnb
	I1025 21:36:33.326342  669884 out.go:235]   - Configuring RBAC rules ...
	I1025 21:36:33.326431  669884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 21:36:33.326502  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 21:36:33.326752  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 21:36:33.327130  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 21:36:33.327454  669884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 21:36:33.327676  669884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 21:36:33.327963  669884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 21:36:33.328076  669884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 21:36:33.328292  669884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 21:36:33.328345  669884 kubeadm.go:310] 
	I1025 21:36:33.328530  669884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 21:36:33.328545  669884 kubeadm.go:310] 
	I1025 21:36:33.328856  669884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 21:36:33.328871  669884 kubeadm.go:310] 
	I1025 21:36:33.328919  669884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 21:36:33.329006  669884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 21:36:33.329053  669884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 21:36:33.329060  669884 kubeadm.go:310] 
	I1025 21:36:33.329104  669884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 21:36:33.329110  669884 kubeadm.go:310] 
	I1025 21:36:33.329165  669884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 21:36:33.329176  669884 kubeadm.go:310] 
	I1025 21:36:33.329232  669884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 21:36:33.329297  669884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 21:36:33.329355  669884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 21:36:33.329361  669884 kubeadm.go:310] 
	I1025 21:36:33.329437  669884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 21:36:33.329510  669884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 21:36:33.329516  669884 kubeadm.go:310] 
	I1025 21:36:33.329585  669884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nzx9mz.98l3h3sqt096xbnb \
	I1025 21:36:33.329673  669884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a \
	I1025 21:36:33.329694  669884 kubeadm.go:310] 	--control-plane 
	I1025 21:36:33.329701  669884 kubeadm.go:310] 
	I1025 21:36:33.329769  669884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 21:36:33.329775  669884 kubeadm.go:310] 
	I1025 21:36:33.329862  669884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nzx9mz.98l3h3sqt096xbnb \
	I1025 21:36:33.330026  669884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a 
	I1025 21:36:33.330040  669884 cni.go:84] Creating CNI manager for ""
	I1025 21:36:33.330047  669884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:36:33.331637  669884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 21:36:33.332891  669884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 21:36:33.343974  669884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 21:36:33.365297  669884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 21:36:33.365418  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-413632 minikube.k8s.io/updated_at=2024_10_25T21_36_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=addons-413632 minikube.k8s.io/primary=true
	I1025 21:36:33.365426  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:33.388517  669884 ops.go:34] apiserver oom_adj: -16
	I1025 21:36:33.517793  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:34.018392  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:34.518537  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:35.017962  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:35.518541  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:36.018446  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:36.518757  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.018130  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.518182  669884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 21:36:37.617427  669884 kubeadm.go:1113] duration metric: took 4.252098973s to wait for elevateKubeSystemPrivileges
	I1025 21:36:37.617478  669884 kubeadm.go:394] duration metric: took 14.413673011s to StartCluster
	I1025 21:36:37.617504  669884 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:37.617669  669884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:36:37.618212  669884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:36:37.618454  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 21:36:37.618491  669884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.223 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 21:36:37.618546  669884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 21:36:37.618668  669884 addons.go:69] Setting yakd=true in profile "addons-413632"
	I1025 21:36:37.618687  669884 addons.go:69] Setting ingress=true in profile "addons-413632"
	I1025 21:36:37.618698  669884 addons.go:234] Setting addon yakd=true in "addons-413632"
	I1025 21:36:37.618703  669884 addons.go:69] Setting ingress-dns=true in profile "addons-413632"
	I1025 21:36:37.618703  669884 addons.go:69] Setting volcano=true in profile "addons-413632"
	I1025 21:36:37.618724  669884 addons.go:234] Setting addon ingress-dns=true in "addons-413632"
	I1025 21:36:37.618701  669884 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-413632"
	I1025 21:36:37.618736  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618738  669884 addons.go:69] Setting volumesnapshots=true in profile "addons-413632"
	I1025 21:36:37.618742  669884 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-413632"
	I1025 21:36:37.618749  669884 addons.go:234] Setting addon volumesnapshots=true in "addons-413632"
	I1025 21:36:37.618747  669884 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-413632"
	I1025 21:36:37.618755  669884 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-413632"
	I1025 21:36:37.618777  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618782  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618787  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618785  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:37.618842  669884 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-413632"
	I1025 21:36:37.618725  669884 addons.go:234] Setting addon volcano=true in "addons-413632"
	I1025 21:36:37.618877  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.618889  669884 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-413632"
	I1025 21:36:37.618917  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619224  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619229  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619239  669884 addons.go:69] Setting storage-provisioner=true in profile "addons-413632"
	I1025 21:36:37.619251  669884 addons.go:234] Setting addon storage-provisioner=true in "addons-413632"
	I1025 21:36:37.618709  669884 addons.go:234] Setting addon ingress=true in "addons-413632"
	I1025 21:36:37.619263  669884 addons.go:69] Setting cloud-spanner=true in profile "addons-413632"
	I1025 21:36:37.619271  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619276  669884 addons.go:69] Setting metrics-server=true in profile "addons-413632"
	I1025 21:36:37.619266  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619287  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619291  669884 addons.go:234] Setting addon metrics-server=true in "addons-413632"
	I1025 21:36:37.619292  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619314  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619324  669884 addons.go:69] Setting default-storageclass=true in profile "addons-413632"
	I1025 21:36:37.619338  669884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-413632"
	I1025 21:36:37.619530  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619561  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619596  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619623  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619651  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619672  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619681  669884 addons.go:69] Setting gcp-auth=true in profile "addons-413632"
	I1025 21:36:37.619686  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619697  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619711  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619252  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619698  669884 mustload.go:65] Loading cluster: addons-413632
	I1025 21:36:37.619776  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619266  669884 addons.go:69] Setting inspektor-gadget=true in profile "addons-413632"
	I1025 21:36:37.619867  669884 addons.go:234] Setting addon inspektor-gadget=true in "addons-413632"
	I1025 21:36:37.619944  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619964  669884 config.go:182] Loaded profile config "addons-413632": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:36:37.618684  669884 addons.go:69] Setting registry=true in profile "addons-413632"
	I1025 21:36:37.620100  669884 addons.go:234] Setting addon registry=true in "addons-413632"
	I1025 21:36:37.620128  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.619227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620300  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620314  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620329  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620385  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620414  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620473  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.619279  669884 addons.go:234] Setting addon cloud-spanner=true in "addons-413632"
	I1025 21:36:37.620508  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620516  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.620477  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.620603  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.620775  669884 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-413632"
	I1025 21:36:37.620829  669884 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-413632"
	I1025 21:36:37.620869  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.621494  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.621569  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.621656  669884 out.go:177] * Verifying Kubernetes components...
	I1025 21:36:37.619315  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.619227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.622009  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.641107  669884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:36:37.641337  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.641386  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.641556  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43527
	I1025 21:36:37.641624  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44359
	I1025 21:36:37.641740  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I1025 21:36:37.641816  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1025 21:36:37.641879  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I1025 21:36:37.641946  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
	I1025 21:36:37.642090  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642275  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642379  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642566  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.642583  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.642694  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.642866  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.643033  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.643033  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.643046  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.643160  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.643173  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.643217  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.644094  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644114  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644207  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.644245  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.644260  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644274  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644331  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644462  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.644481  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.644569  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644620  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.644894  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.645989  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.646025  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.653408  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.653460  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.653932  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.656122  669884 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-413632"
	I1025 21:36:37.656164  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.656506  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.656540  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.658481  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.658821  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.658873  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.659677  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.659716  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.671456  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38099
	I1025 21:36:37.672122  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.672778  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.672798  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.673213  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.673415  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.673882  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I1025 21:36:37.674423  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.674971  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.674989  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.675384  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.676428  669884 addons.go:234] Setting addon default-storageclass=true in "addons-413632"
	I1025 21:36:37.676468  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.676838  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.676879  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.677641  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.677683  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.679275  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36373
	I1025 21:36:37.679722  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.680232  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.680257  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.680605  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.680750  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.681374  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I1025 21:36:37.681926  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.682476  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.682506  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.682568  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.683041  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.683715  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.683753  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.684592  669884 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 21:36:37.685808  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 21:36:37.685837  669884 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 21:36:37.685859  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.687341  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I1025 21:36:37.687846  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.688337  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.688362  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.688722  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.689290  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.689333  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.689529  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.689561  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.689586  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.689766  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.689946  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.690101  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.690212  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.691106  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45777
	I1025 21:36:37.691467  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.691954  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.691971  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.692310  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.692489  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.694186  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:37.694555  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.694591  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.695238  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I1025 21:36:37.695695  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.696201  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.696226  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.696586  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.696929  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.702659  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1025 21:36:37.703102  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.703898  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.704050  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36529
	I1025 21:36:37.704661  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.704685  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.704760  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.704947  669884 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1025 21:36:37.705162  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.705741  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.705797  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.706184  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 21:36:37.706211  669884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 21:36:37.706235  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.707098  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.707116  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.707524  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.708152  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.708197  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.710211  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.710420  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42767
	I1025 21:36:37.710911  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.710933  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.711141  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.711341  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.711511  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.711649  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.711962  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.712060  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35041
	I1025 21:36:37.712541  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.712685  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I1025 21:36:37.713055  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.713066  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43799
	I1025 21:36:37.713075  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.713425  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.713444  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.713504  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.713605  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.713866  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.714074  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.714088  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.714227  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.714275  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.714511  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.715087  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.715124  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.715343  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.716054  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.716097  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.716367  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.716379  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.716790  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.717046  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.718872  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.721063  669884 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 21:36:37.722494  669884 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 21:36:37.722515  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 21:36:37.722538  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.725679  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
	I1025 21:36:37.725852  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.726565  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.726583  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.726603  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.726785  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.726988  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.727172  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.727345  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.727796  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.727813  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.728849  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.729095  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.730684  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.731424  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36163
	I1025 21:36:37.732015  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.732520  669884 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1025 21:36:37.732618  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.732638  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.733156  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.733831  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.733876  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.734008  669884 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:36:37.734024  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1025 21:36:37.734043  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.736497  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.736840  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.736870  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.737536  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.737726  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.737897  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.738113  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I1025 21:36:37.738107  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.738727  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.739269  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.739289  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.739714  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.739904  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.741442  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.743137  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1025 21:36:37.744377  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:37.745638  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:37.747102  669884 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:36:37.747127  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 21:36:37.747148  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.747455  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1025 21:36:37.747845  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.748354  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.748371  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.748742  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.748933  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.750844  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.751091  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.751110  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.751342  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
	I1025 21:36:37.751538  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I1025 21:36:37.751580  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.751697  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.751793  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.751870  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.751952  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.752153  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.752520  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.752536  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.752548  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.752571  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.753626  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46091
	I1025 21:36:37.753922  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.754154  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I1025 21:36:37.754418  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.754434  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.754502  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.754955  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.754962  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.754981  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.755147  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.755899  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.755923  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.756322  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.756556  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.756835  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.757439  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.757642  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.758212  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:37.758219  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.758259  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:37.758277  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
	I1025 21:36:37.758689  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.759167  669884 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1025 21:36:37.759243  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.759267  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1025 21:36:37.759294  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I1025 21:36:37.759648  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.760120  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.760171  669884 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1025 21:36:37.760657  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.760674  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.760216  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.760234  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 21:36:37.760243  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.761141  669884 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:36:37.761159  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 21:36:37.761176  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.761317  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.761331  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.761415  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.761694  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.761803  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.762158  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.762163  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.763412  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.763548  669884 out.go:177]   - Using image docker.io/registry:2.8.3
	I1025 21:36:37.763656  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:37.763697  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I1025 21:36:37.763668  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:37.764074  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:37.764091  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:37.764099  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:37.764105  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:37.764189  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.764262  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.764586  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:37.764620  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:37.764878  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	W1025 21:36:37.764990  669884 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 21:36:37.765038  669884 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 21:36:37.765054  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 21:36:37.765068  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.764735  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.765117  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.765146  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 21:36:37.766226  669884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:36:37.767101  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.767351  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.767553  669884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:36:37.767570  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 21:36:37.767587  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.767660  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 21:36:37.767928  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I1025 21:36:37.768491  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.768936  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769495  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.769515  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769697  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.769770  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.769874  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.769924  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.769999  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.770169  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.770472  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.770488  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.770545  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 21:36:37.770687  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.771267  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.771412  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.771492  669884 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1025 21:36:37.771539  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.772002  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.772018  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.771628  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.772199  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.772219  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.772295  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.772500  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.772537  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.772643  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.772970  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.773023  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.773225  669884 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1025 21:36:37.773514  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 21:36:37.773533  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.773925  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 21:36:37.773926  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.774690  669884 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1025 21:36:37.774779  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.775021  669884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 21:36:37.775034  669884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 21:36:37.775049  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.775981  669884 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 21:36:37.776000  669884 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1025 21:36:37.776018  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.777148  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.777249  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 21:36:37.777619  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.777644  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.777753  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.777914  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.778058  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.778183  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.778291  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.778806  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.778826  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.778907  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.779069  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.779232  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.779533  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.779793  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 21:36:37.780895  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.781210  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I1025 21:36:37.781399  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.781422  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.781705  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.781716  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.781870  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.782011  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.782150  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.782207  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.782226  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.782312  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 21:36:37.782580  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.782757  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.783557  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 21:36:37.783581  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 21:36:37.783607  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.783992  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.785802  669884 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 21:36:37.786492  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.786899  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.786925  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.787094  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.787366  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.787542  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.787689  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:37.788310  669884 out.go:177]   - Using image docker.io/busybox:stable
	W1025 21:36:37.788663  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36878->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.788699  669884 retry.go:31] will retry after 231.853642ms: ssh: handshake failed: read tcp 192.168.39.1:36878->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.789515  669884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:36:37.789534  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 21:36:37.789547  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.792083  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.792503  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.792535  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.792724  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.792881  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.793040  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.793173  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	W1025 21:36:37.793809  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36886->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.793836  669884 retry.go:31] will retry after 246.018745ms: ssh: handshake failed: read tcp 192.168.39.1:36886->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.794644  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I1025 21:36:37.794986  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:37.795468  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:37.795481  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:37.795769  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:37.795974  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:37.797359  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:37.799230  669884 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 21:36:37.800580  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 21:36:37.800601  669884 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 21:36:37.800621  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:37.803681  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.804087  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:37.804120  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:37.804200  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:37.804336  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:37.804478  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:37.804584  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	W1025 21:36:37.805158  669884 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36898->192.168.39.223:22: read: connection reset by peer
	I1025 21:36:37.805184  669884 retry.go:31] will retry after 207.690543ms: ssh: handshake failed: read tcp 192.168.39.1:36898->192.168.39.223:22: read: connection reset by peer
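The sshutil.go/retry.go lines above show the SSH dial being retried with short, slightly different delays after each "connection reset by peer". As a rough sketch of that retry-with-backoff pattern (function names and jitter are illustrative, not minikube's actual retry.go API):

// Minimal retry-with-backoff sketch; illustrative only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter runs fn up to attempts times, sleeping a jittered, growing
// delay between failures (compare the ~200-250ms delays in the log above).
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base *= 2
	}
	return err
}

func main() {
	// Simulate a dial that keeps failing, to exercise the retry loop.
	_ = retryAfter(3, 200*time.Millisecond, func() error {
		return fmt.Errorf("ssh: handshake failed: connection reset by peer")
	})
}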
	I1025 21:36:38.046751  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 21:36:38.067936  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 21:36:38.138164  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 21:36:38.149792  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 21:36:38.149820  669884 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 21:36:38.199326  669884 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 21:36:38.199365  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1025 21:36:38.256505  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 21:36:38.267795  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 21:36:38.267820  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 21:36:38.273880  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 21:36:38.292887  669884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 21:36:38.292904  669884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
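The bash pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves to the host IP. A hedged client-go equivalent of that edit (the helper name, kubeconfig path, and the string splice before the forward plugin are assumptions, not minikube's implementation):

// Sketch: insert a hosts block into the coredns Corefile via client-go.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := fmt.Sprintf("hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n    ", hostIP)
	// Splice the hosts block in just before the forward plugin, mirroring the sed expression.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", hosts+"forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

func main() {
	if err := injectHostRecord(clientcmd.RecommendedHomeFile, "192.168.39.1"); err != nil {
		fmt.Println(err)
	}
}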
	I1025 21:36:38.296460  669884 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 21:36:38.296487  669884 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 21:36:38.334733  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 21:36:38.334765  669884 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 21:36:38.390784  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 21:36:38.411777  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 21:36:38.446406  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 21:36:38.603498  669884 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:36:38.603526  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 21:36:38.627035  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 21:36:38.627062  669884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 21:36:38.641854  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 21:36:38.641895  669884 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 21:36:38.644192  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 21:36:38.646990  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 21:36:38.647012  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 21:36:38.650357  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 21:36:38.650373  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 21:36:38.799711  669884 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 21:36:38.799737  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 21:36:38.842104  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 21:36:38.842138  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 21:36:38.923930  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 21:36:38.923976  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 21:36:38.929891  669884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:36:38.929915  669884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 21:36:38.945464  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 21:36:38.956741  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 21:36:39.178149  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 21:36:39.198626  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 21:36:39.198682  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 21:36:39.212521  669884 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 21:36:39.212555  669884 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 21:36:39.468762  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 21:36:39.468804  669884 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 21:36:39.562296  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.51549801s)
	I1025 21:36:39.562378  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.562391  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.562736  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.562758  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.562768  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.562768  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.562778  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.563093  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.563097  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.563108  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.601643  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 21:36:39.601685  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 21:36:39.773794  669884 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:36:39.773823  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 21:36:39.917657  669884 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 21:36:39.917696  669884 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 21:36:39.947976  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.879994441s)
	I1025 21:36:39.948047  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.948060  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.948420  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:39.948476  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.948485  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.948499  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:39.948507  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:39.948998  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:39.949043  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:39.949049  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.199367  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 21:36:40.241745  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 21:36:40.241777  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 21:36:40.474480  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 21:36:40.474522  669884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 21:36:40.913343  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.775135867s)
	I1025 21:36:40.913420  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:40.913434  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:40.913782  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:40.913796  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.913803  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:40.913813  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:40.913821  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:40.914063  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:40.914079  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:40.914088  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:40.931681  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 21:36:40.931705  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 21:36:41.089584  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 21:36:41.089613  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 21:36:41.321570  669884 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:36:41.321604  669884 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 21:36:41.605384  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 21:36:42.070383  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.813820406s)
	I1025 21:36:42.070412  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.796497873s)
	I1025 21:36:42.070451  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070464  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.070511  669884 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.777585152s)
	I1025 21:36:42.070538  669884 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.777598326s)
	I1025 21:36:42.070561  669884 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1025 21:36:42.070462  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070649  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.070843  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:42.070887  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.070895  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.070913  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.070921  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.071005  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:42.071013  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071027  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.071056  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.071069  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.071332  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071347  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.071498  669884 node_ready.go:35] waiting up to 6m0s for node "addons-413632" to be "Ready" ...
	I1025 21:36:42.071656  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.071671  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.092429  669884 node_ready.go:49] node "addons-413632" has status "Ready":"True"
	I1025 21:36:42.092455  669884 node_ready.go:38] duration metric: took 20.910347ms for node "addons-413632" to be "Ready" ...
	I1025 21:36:42.092467  669884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
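node_ready.go and pod_ready.go above poll the API server until the node and the system-critical pods report the Ready condition. A minimal sketch of that kind of poll with client-go (assumes a recent client-go/apimachinery; the selector, namespace, and intervals are illustrative, not minikube's code):

// Sketch: wait until all pods matching a label selector are Ready.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodsReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or an empty list
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodsReady(cs, "kube-system", "k8s-app=kube-dns", 6*time.Minute); err != nil {
		panic(err)
	}
}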
	I1025 21:36:42.200324  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:42.200349  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:42.200824  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:42.200848  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:42.240379  669884 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace to be "Ready" ...
	I1025 21:36:42.613955  669884 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-413632" context rescaled to 1 replicas
	I1025 21:36:44.260740  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:44.810491  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 21:36:44.810547  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:44.814221  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:44.814713  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:44.814761  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:44.814932  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:44.815168  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:44.815371  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:44.815534  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:45.448750  669884 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 21:36:45.653482  669884 addons.go:234] Setting addon gcp-auth=true in "addons-413632"
	I1025 21:36:45.653564  669884 host.go:66] Checking if "addons-413632" exists ...
	I1025 21:36:45.653986  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:45.654038  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:45.669867  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40301
	I1025 21:36:45.670414  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:45.671092  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:45.671117  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:45.671505  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:45.672009  669884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:36:45.672059  669884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:36:45.687213  669884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I1025 21:36:45.687784  669884 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:36:45.688332  669884 main.go:141] libmachine: Using API Version  1
	I1025 21:36:45.688362  669884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:36:45.688703  669884 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:36:45.688896  669884 main.go:141] libmachine: (addons-413632) Calling .GetState
	I1025 21:36:45.690497  669884 main.go:141] libmachine: (addons-413632) Calling .DriverName
	I1025 21:36:45.690728  669884 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 21:36:45.690754  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHHostname
	I1025 21:36:45.693423  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:45.693894  669884 main.go:141] libmachine: (addons-413632) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:f7:68", ip: ""} in network mk-addons-413632: {Iface:virbr1 ExpiryTime:2024-10-25 22:36:06 +0000 UTC Type:0 Mac:52:54:00:7e:f7:68 Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:addons-413632 Clientid:01:52:54:00:7e:f7:68}
	I1025 21:36:45.693920  669884 main.go:141] libmachine: (addons-413632) DBG | domain addons-413632 has defined IP address 192.168.39.223 and MAC address 52:54:00:7e:f7:68 in network mk-addons-413632
	I1025 21:36:45.694133  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHPort
	I1025 21:36:45.694289  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHKeyPath
	I1025 21:36:45.694453  669884 main.go:141] libmachine: (addons-413632) Calling .GetSSHUsername
	I1025 21:36:45.694613  669884 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/addons-413632/id_rsa Username:docker}
	I1025 21:36:46.171720  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.759899873s)
	I1025 21:36:46.171792  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.171805  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.171813  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.725370464s)
	I1025 21:36:46.172203  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172228  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172305  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.528075813s)
	I1025 21:36:46.172318  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.172339  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172351  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172378  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.172388  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.172396  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172503  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.226991786s)
	I1025 21:36:46.172531  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172545  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172612  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.172639  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.172655  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172662  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172782  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.781781741s)
	I1025 21:36:46.172797  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.215996495s)
	I1025 21:36:46.172815  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172829  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.172830  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.172842  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173092  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173105  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.173114  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173123  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173181  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.994956709s)
	I1025 21:36:46.173200  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173211  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173231  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173430  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173511  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173518  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.173537  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.173543  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.173564  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.173679  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.974162211s)
	I1025 21:36:46.173694  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.173710  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	W1025 21:36:46.173720  669884 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:36:46.173764  669884 retry.go:31] will retry after 304.949065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 21:36:46.174021  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174034  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174060  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.174129  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.174280  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174289  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174298  669884 addons.go:475] Verifying addon ingress=true in "addons-413632"
	I1025 21:36:46.174303  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174313  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174322  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.174336  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.174639  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.174650  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.174659  669884 addons.go:475] Verifying addon registry=true in "addons-413632"
	I1025 21:36:46.175142  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.175214  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.175233  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.175240  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.175754  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.175903  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.175926  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.175953  669884 addons.go:475] Verifying addon metrics-server=true in "addons-413632"
	I1025 21:36:46.176553  669884 out.go:177] * Verifying registry addon...
	I1025 21:36:46.177421  669884 out.go:177] * Verifying ingress addon...
	I1025 21:36:46.172402  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.179086  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.179112  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.179119  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.179900  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.179934  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.179949  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.179956  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.179963  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.180297  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:46.180365  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.180404  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.180878  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 21:36:46.181135  669884 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 21:36:46.181818  669884 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-413632 service yakd-dashboard -n yakd-dashboard
	
	I1025 21:36:46.199682  669884 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 21:36:46.199707  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:46.201641  669884 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 21:36:46.201659  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:46.245196  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:46.245223  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:46.245507  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:46.245528  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:46.479519  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
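The failed apply earlier is the CRD-ordering problem the stderr spells out ("ensure CRDs are installed first"): the VolumeSnapshotClass object was submitted in the same apply as its CRD, before that CRD was established, and the retry above succeeds only because the CRDs created in the first pass have registered by then. A safer ordering, sketched here by shelling out to kubectl from Go (file names are taken from the log; this is not minikube's code), applies the CRDs, waits for them to become Established, then applies the dependent objects:

// Sketch: apply CRDs first, wait for Established, then apply the CRs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := run("apply", "-f", f); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	// Block until the API server can serve the new snapshot kinds.
	if err := run("wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Now the VolumeSnapshotClass and controller manifests resolve cleanly.
	if err := run("apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
		"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
		"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}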
	I1025 21:36:46.686397  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:46.687757  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:46.748362  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:47.226092  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:47.226246  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:47.701138  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:47.701169  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:47.994796  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.389333634s)
	I1025 21:36:47.994829  669884 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.304081148s)
	I1025 21:36:47.994872  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:47.994889  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:47.995163  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:47.995218  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:47.995235  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:47.995246  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:47.995703  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:47.995745  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:47.995770  669884 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-413632"
	I1025 21:36:47.997316  669884 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 21:36:47.997433  669884 out.go:177] * Verifying csi-hostpath-driver addon...
	I1025 21:36:47.999053  669884 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1025 21:36:47.999903  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 21:36:48.000716  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 21:36:48.000735  669884 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 21:36:48.021039  669884 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 21:36:48.021066  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:48.153906  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 21:36:48.153937  669884 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 21:36:48.185868  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:48.187785  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:48.218996  669884 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:36:48.219027  669884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 21:36:48.238091  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.758508808s)
	I1025 21:36:48.238166  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:48.238189  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:48.238464  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:48.238483  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:48.238493  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:48.238501  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:48.238837  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:48.238855  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:48.280049  669884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 21:36:48.505452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:48.686320  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:48.686449  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.005035  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:49.193331  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:49.193742  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.266396  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:49.452054  669884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.17195323s)
	I1025 21:36:49.452112  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:49.452124  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:49.452467  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:49.452491  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:49.452500  669884 main.go:141] libmachine: Making call to close driver server
	I1025 21:36:49.452501  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:49.452508  669884 main.go:141] libmachine: (addons-413632) Calling .Close
	I1025 21:36:49.452728  669884 main.go:141] libmachine: (addons-413632) DBG | Closing plugin on server side
	I1025 21:36:49.452758  669884 main.go:141] libmachine: Successfully made call to close driver server
	I1025 21:36:49.452769  669884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 21:36:49.453814  669884 addons.go:475] Verifying addon gcp-auth=true in "addons-413632"
	I1025 21:36:49.455518  669884 out.go:177] * Verifying gcp-auth addon...
	I1025 21:36:49.458922  669884 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 21:36:49.527285  669884 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 21:36:49.527309  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:49.553333  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:49.698930  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:49.699541  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:49.962567  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:50.006150  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:50.187065  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:50.187152  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:50.465552  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:50.565068  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:50.686388  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:50.686444  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:50.962726  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:51.006882  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:51.187075  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:51.187097  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:51.468523  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:51.504915  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:51.687157  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:51.687801  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:51.747064  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:51.963162  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:52.005215  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:52.185114  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:52.186902  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:52.463821  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:52.505607  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:52.685670  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:52.685860  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:52.963246  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:53.004488  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:53.185560  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:53.185737  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:53.464796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:53.505162  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:53.685492  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:53.685624  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:53.963411  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:54.005192  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:54.186114  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:54.186439  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:54.246683  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:54.463183  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:54.504876  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:54.685180  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:54.685201  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:54.962573  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:55.005612  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:55.185644  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:55.185723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:55.463540  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:55.504772  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:55.687650  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:55.687874  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:55.963028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:56.004929  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:56.184722  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:56.185174  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:56.463305  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:56.504224  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:56.686830  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:56.686979  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:56.746795  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:56.963112  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:57.004973  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:57.185698  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:57.186239  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:57.471739  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:57.574143  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:57.685414  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:57.685804  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:57.963229  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:58.004448  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:58.185324  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:58.186017  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:58.464395  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:58.566576  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:58.684988  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:58.685275  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:58.747092  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:36:58.962027  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:59.005621  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:59.185869  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:59.186567  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:59.463707  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:36:59.504530  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:36:59.686848  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:36:59.687538  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:36:59.963423  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:00.004285  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:00.185874  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:00.186291  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:00.463518  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:00.565353  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:00.684714  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:00.685540  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:00.748468  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:00.962992  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:01.005353  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:01.185494  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:01.185838  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:01.462788  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:01.505087  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:01.685767  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:01.686059  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:01.962949  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:02.006370  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:02.186077  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:02.186675  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:02.463809  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:02.505942  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:02.685830  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:02.686645  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:02.962164  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:03.005381  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:03.185302  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:03.186445  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:03.246434  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:03.462813  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:03.505751  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:03.686957  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:03.687520  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:03.962039  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.008452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:04.186326  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:04.186375  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:04.463172  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.995859  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:04.996067  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:04.996109  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:04.996452  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.005095  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.184832  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:05.185227  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:05.246717  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:05.463966  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:05.505445  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:05.685555  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:05.685887  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:05.962542  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:06.004811  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:06.186301  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:06.186469  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:06.463519  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:06.504752  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:06.686089  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:06.686477  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:06.962995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:07.009097  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:07.185456  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:07.185702  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:07.469954  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:07.572805  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:07.685327  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:07.685982  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:07.747947  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:07.962881  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:08.004929  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:08.190593  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:08.190678  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:08.464117  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:08.505237  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:08.685120  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:08.686638  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:08.962894  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:09.005012  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:09.185937  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:09.186508  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:09.463079  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:09.504819  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:09.684709  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:09.685625  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:09.962861  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:10.005280  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:10.185354  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:10.185716  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:10.247110  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:10.463834  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:10.504653  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:10.686008  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:10.686675  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:10.964029  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:11.005495  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:11.189572  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:11.190102  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:11.464541  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:11.504473  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:11.686560  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:11.687994  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:11.962556  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:12.005908  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:12.185564  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:12.185923  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:12.247206  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:12.464806  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:12.504816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:12.686479  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:12.686674  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:12.962832  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:13.004859  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:13.185723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:13.185942  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:13.464043  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:13.504913  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:13.686203  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:13.686447  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:13.963087  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:14.005456  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:14.185482  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:14.185725  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:14.247360  669884 pod_ready.go:103] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"False"
	I1025 21:37:14.465142  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:14.505355  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:14.686639  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:14.686920  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:14.963004  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:15.005220  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:15.185061  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:15.186269  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:15.464985  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:15.505642  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:15.687211  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:15.687752  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:15.962994  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:16.006502  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:16.185176  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:16.185529  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:16.465028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:16.505068  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:16.686517  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:16.688226  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:16.748007  669884 pod_ready.go:93] pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.748034  669884 pod_ready.go:82] duration metric: took 34.507625324s for pod "amd-gpu-device-plugin-967pw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.748044  669884 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.755175  669884 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9bd5k" not found
	I1025 21:37:16.755203  669884 pod_ready.go:82] duration metric: took 7.152705ms for pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace to be "Ready" ...
	E1025 21:37:16.755214  669884 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9bd5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9bd5k" not found
	I1025 21:37:16.755221  669884 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.764267  669884 pod_ready.go:93] pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.764290  669884 pod_ready.go:82] duration metric: took 9.063153ms for pod "coredns-7c65d6cfc9-9tqzw" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.764300  669884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.777620  669884 pod_ready.go:93] pod "etcd-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.777648  669884 pod_ready.go:82] duration metric: took 13.338735ms for pod "etcd-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.777661  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.781949  669884 pod_ready.go:93] pod "kube-apiserver-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.781965  669884 pod_ready.go:82] duration metric: took 4.290302ms for pod "kube-apiserver-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.781974  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.945055  669884 pod_ready.go:93] pod "kube-controller-manager-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:16.945081  669884 pod_ready.go:82] duration metric: took 163.101197ms for pod "kube-controller-manager-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.945095  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jg272" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:16.963259  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:17.004047  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:17.184620  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:17.184934  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:17.344309  669884 pod_ready.go:93] pod "kube-proxy-jg272" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:17.344334  669884 pod_ready.go:82] duration metric: took 399.232835ms for pod "kube-proxy-jg272" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.344346  669884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.465557  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:17.504440  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:17.685992  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:17.686117  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:17.743994  669884 pod_ready.go:93] pod "kube-scheduler-addons-413632" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:17.744023  669884 pod_ready.go:82] duration metric: took 399.669334ms for pod "kube-scheduler-addons-413632" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.744038  669884 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:17.962857  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:18.414049  669884 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace has status "Ready":"True"
	I1025 21:37:18.414076  669884 pod_ready.go:82] duration metric: took 670.03064ms for pod "nvidia-device-plugin-daemonset-k298m" in "kube-system" namespace to be "Ready" ...
	I1025 21:37:18.414085  669884 pod_ready.go:39] duration metric: took 36.321608322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 21:37:18.414106  669884 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:37:18.414170  669884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:37:18.419042  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:18.419419  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:18.419433  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:18.448353  669884 api_server.go:72] duration metric: took 40.829819368s to wait for apiserver process to appear ...
	I1025 21:37:18.448386  669884 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:37:18.448409  669884 api_server.go:253] Checking apiserver healthz at https://192.168.39.223:8443/healthz ...
	I1025 21:37:18.452931  669884 api_server.go:279] https://192.168.39.223:8443/healthz returned 200:
	ok
	I1025 21:37:18.454176  669884 api_server.go:141] control plane version: v1.31.1
	I1025 21:37:18.454210  669884 api_server.go:131] duration metric: took 5.81756ms to wait for apiserver health ...
	I1025 21:37:18.454219  669884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 21:37:18.462180  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:18.463953  669884 system_pods.go:59] 18 kube-system pods found
	I1025 21:37:18.463985  669884 system_pods.go:61] "amd-gpu-device-plugin-967pw" [cdd329aa-b9f0-4233-b2ab-db63265d7d0c] Running
	I1025 21:37:18.463990  669884 system_pods.go:61] "coredns-7c65d6cfc9-9tqzw" [88e7f6a7-96fd-4c16-b0df-4feb71acbfe4] Running
	I1025 21:37:18.463997  669884 system_pods.go:61] "csi-hostpath-attacher-0" [0a815931-e689-4cde-b86e-48ce8d155a06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:37:18.464006  669884 system_pods.go:61] "csi-hostpath-resizer-0" [b9c13546-2b70-4d29-a94b-c906bb7cab5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:37:18.464016  669884 system_pods.go:61] "csi-hostpathplugin-dp8sx" [eb7167c1-6de0-4a01-b052-10f732186a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:37:18.464022  669884 system_pods.go:61] "etcd-addons-413632" [5a85e992-3ea8-4882-a10d-4b3af5a577de] Running
	I1025 21:37:18.464028  669884 system_pods.go:61] "kube-apiserver-addons-413632" [dfbfa04d-4f8a-439a-bdf8-ce150e0511d6] Running
	I1025 21:37:18.464032  669884 system_pods.go:61] "kube-controller-manager-addons-413632" [f7d9dfe4-d9e4-4bfd-9767-ec5521fe89c9] Running
	I1025 21:37:18.464038  669884 system_pods.go:61] "kube-ingress-dns-minikube" [1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187] Running
	I1025 21:37:18.464042  669884 system_pods.go:61] "kube-proxy-jg272" [d3a14441-9149-4a18-b5d6-06302835d38b] Running
	I1025 21:37:18.464046  669884 system_pods.go:61] "kube-scheduler-addons-413632" [9634b8e0-0e8a-4983-907d-c1bd095f3cc8] Running
	I1025 21:37:18.464057  669884 system_pods.go:61] "metrics-server-84c5f94fbc-7drm7" [9dd37623-d67c-48a2-8e11-18a05cd71be2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:37:18.464066  669884 system_pods.go:61] "nvidia-device-plugin-daemonset-k298m" [b318342e-76c3-477e-8d99-38359ebef6bf] Running
	I1025 21:37:18.464072  669884 system_pods.go:61] "registry-66c9cd494c-xj8xz" [e20b3155-ea05-4981-a773-3c2c98521771] Running
	I1025 21:37:18.464083  669884 system_pods.go:61] "registry-proxy-kpm4c" [211d5f74-7b9d-4d8c-bcdb-bce343e97d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:37:18.464095  669884 system_pods.go:61] "snapshot-controller-56fcc65765-d6wjv" [cfcf8f38-ae62-4726-9acd-d9813a6a11e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.464104  669884 system_pods.go:61] "snapshot-controller-56fcc65765-f8nh5" [73b07a02-551a-4a03-b0f4-a0f1d7dde2b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.464109  669884 system_pods.go:61] "storage-provisioner" [f755426f-779c-44a0-9058-958be3222114] Running
	I1025 21:37:18.464121  669884 system_pods.go:74] duration metric: took 9.895132ms to wait for pod list to return data ...
	I1025 21:37:18.464132  669884 default_sa.go:34] waiting for default service account to be created ...
	I1025 21:37:18.504680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:18.544274  669884 default_sa.go:45] found service account: "default"
	I1025 21:37:18.544307  669884 default_sa.go:55] duration metric: took 80.16714ms for default service account to be created ...
	I1025 21:37:18.544318  669884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 21:37:18.694764  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:18.694960  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:18.800516  669884 system_pods.go:86] 18 kube-system pods found
	I1025 21:37:18.800558  669884 system_pods.go:89] "amd-gpu-device-plugin-967pw" [cdd329aa-b9f0-4233-b2ab-db63265d7d0c] Running
	I1025 21:37:18.800573  669884 system_pods.go:89] "coredns-7c65d6cfc9-9tqzw" [88e7f6a7-96fd-4c16-b0df-4feb71acbfe4] Running
	I1025 21:37:18.800582  669884 system_pods.go:89] "csi-hostpath-attacher-0" [0a815931-e689-4cde-b86e-48ce8d155a06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 21:37:18.800649  669884 system_pods.go:89] "csi-hostpath-resizer-0" [b9c13546-2b70-4d29-a94b-c906bb7cab5e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 21:37:18.800674  669884 system_pods.go:89] "csi-hostpathplugin-dp8sx" [eb7167c1-6de0-4a01-b052-10f732186a02] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 21:37:18.800682  669884 system_pods.go:89] "etcd-addons-413632" [5a85e992-3ea8-4882-a10d-4b3af5a577de] Running
	I1025 21:37:18.800691  669884 system_pods.go:89] "kube-apiserver-addons-413632" [dfbfa04d-4f8a-439a-bdf8-ce150e0511d6] Running
	I1025 21:37:18.800702  669884 system_pods.go:89] "kube-controller-manager-addons-413632" [f7d9dfe4-d9e4-4bfd-9767-ec5521fe89c9] Running
	I1025 21:37:18.800715  669884 system_pods.go:89] "kube-ingress-dns-minikube" [1fcf268d-cd8a-41eb-aeca-eb9e2bd5a187] Running
	I1025 21:37:18.800722  669884 system_pods.go:89] "kube-proxy-jg272" [d3a14441-9149-4a18-b5d6-06302835d38b] Running
	I1025 21:37:18.800731  669884 system_pods.go:89] "kube-scheduler-addons-413632" [9634b8e0-0e8a-4983-907d-c1bd095f3cc8] Running
	I1025 21:37:18.800740  669884 system_pods.go:89] "metrics-server-84c5f94fbc-7drm7" [9dd37623-d67c-48a2-8e11-18a05cd71be2] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 21:37:18.800750  669884 system_pods.go:89] "nvidia-device-plugin-daemonset-k298m" [b318342e-76c3-477e-8d99-38359ebef6bf] Running
	I1025 21:37:18.800757  669884 system_pods.go:89] "registry-66c9cd494c-xj8xz" [e20b3155-ea05-4981-a773-3c2c98521771] Running
	I1025 21:37:18.800771  669884 system_pods.go:89] "registry-proxy-kpm4c" [211d5f74-7b9d-4d8c-bcdb-bce343e97d06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 21:37:18.800784  669884 system_pods.go:89] "snapshot-controller-56fcc65765-d6wjv" [cfcf8f38-ae62-4726-9acd-d9813a6a11e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.800797  669884 system_pods.go:89] "snapshot-controller-56fcc65765-f8nh5" [73b07a02-551a-4a03-b0f4-a0f1d7dde2b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 21:37:18.800803  669884 system_pods.go:89] "storage-provisioner" [f755426f-779c-44a0-9058-958be3222114] Running
	I1025 21:37:18.800814  669884 system_pods.go:126] duration metric: took 256.487942ms to wait for k8s-apps to be running ...
	I1025 21:37:18.800827  669884 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 21:37:18.800884  669884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:37:18.814841  669884 system_svc.go:56] duration metric: took 14.005631ms WaitForService to wait for kubelet
	I1025 21:37:18.814874  669884 kubeadm.go:582] duration metric: took 41.196346797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 21:37:18.814898  669884 node_conditions.go:102] verifying NodePressure condition ...
	I1025 21:37:18.944374  669884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 21:37:18.944403  669884 node_conditions.go:123] node cpu capacity is 2
	I1025 21:37:18.944415  669884 node_conditions.go:105] duration metric: took 129.510826ms to run NodePressure ...
	I1025 21:37:18.944427  669884 start.go:241] waiting for startup goroutines ...
	I1025 21:37:18.961934  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:19.004629  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:19.186640  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:19.187487  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:19.466592  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:19.504272  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:19.685785  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:19.686505  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:19.966530  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:20.004324  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:20.187313  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:20.187337  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:20.462304  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:20.504298  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:20.685505  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:20.685906  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:20.963286  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:21.005022  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:21.186557  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:21.186723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:21.465127  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:21.504699  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:21.685886  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:21.686061  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:21.963016  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:22.006000  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:22.185555  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:22.186072  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:22.466115  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:22.506457  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:22.685879  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:22.687727  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:22.963299  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:23.004995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:23.185551  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:23.185968  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:23.865885  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:23.866062  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:23.866355  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:23.866568  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:23.963413  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:24.004711  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:24.185883  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:24.186723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:24.464383  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:24.504365  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:24.685073  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:24.686060  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:24.962865  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:25.006690  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:25.185033  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:25.185232  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:25.464883  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:25.504646  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:25.685415  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:25.686723  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:25.962853  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:26.005234  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:26.185694  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:26.186058  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:26.464716  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:26.504476  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:26.686480  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:26.686949  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:26.964396  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:27.005133  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:27.186028  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:27.186357  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:27.465719  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:27.505392  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:27.685557  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:27.686484  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:27.962519  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:28.004368  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:28.185840  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:28.186254  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:28.465758  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:28.504829  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:28.685304  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:28.686274  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:28.963308  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:29.004680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:29.186130  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:29.186632  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:29.464402  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:29.504338  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:29.685332  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:29.685385  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:29.962680  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:30.004399  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:30.187059  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:30.187543  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:30.463464  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:30.504618  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:30.685540  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:30.686371  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:30.962043  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:31.005510  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:31.185085  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:31.185913  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:31.465950  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:31.567422  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:31.686288  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:31.686379  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 21:37:31.962919  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:32.005490  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:32.186644  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:32.186718  669884 kapi.go:107] duration metric: took 46.005837011s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 21:37:32.462934  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:32.504566  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:32.685652  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:32.964223  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:33.011820  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:33.185922  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:33.462864  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:33.504869  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:33.686368  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:33.963534  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:34.065483  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:34.186073  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:34.479551  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:34.506200  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:34.685820  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:34.961995  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:35.005522  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:35.186432  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:35.462812  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:35.505250  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:35.686097  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:35.963661  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:36.004592  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:36.186012  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:36.465131  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:36.505230  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:36.685859  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:36.964849  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:37.007003  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:37.185720  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:37.462955  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:37.505045  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:37.685537  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.247437  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:38.247880  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.250090  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:38.470490  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:38.507376  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:38.685885  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:38.962644  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:39.004183  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:39.185641  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:39.462028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:39.504736  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:39.686029  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:39.962633  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:40.008440  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:40.187238  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:40.463437  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:40.565280  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:40.687840  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:40.963174  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:41.005012  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:41.185368  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:41.466695  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:41.505336  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:41.685914  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:41.963343  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:42.008000  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:42.186052  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:42.463010  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:42.506342  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:42.685671  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:42.962967  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:43.004826  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:43.186849  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:43.464028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:43.505585  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:43.945995  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:43.962505  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:44.004776  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:44.187249  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:44.468358  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:44.504184  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:44.685092  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:44.962745  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:45.004905  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:45.185217  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:45.465499  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:45.504175  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:45.685943  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:45.965052  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:46.063729  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:46.186352  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:46.462926  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:46.504816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:46.686674  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:46.963779  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:47.008081  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:47.193998  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:47.464742  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:47.504856  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:47.684972  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:47.962940  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:48.005198  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:48.187097  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:48.466254  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:48.505773  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:48.689742  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:48.963898  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:49.065355  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:49.191993  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:49.465028  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:49.505302  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:49.691463  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:49.965147  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:50.005607  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:50.185623  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:50.462684  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:50.504588  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:50.685916  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:50.962599  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:51.004663  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:51.185329  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:51.465723  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:51.505015  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:51.691392  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:51.964018  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:52.005194  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:52.185871  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:52.463688  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:52.505058  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:52.685320  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:52.965559  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:53.006048  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:53.186971  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:53.463311  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:53.506823  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:53.686464  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:53.963192  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:54.005912  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:54.185427  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:54.463796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:54.505152  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:54.685666  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:54.963616  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:55.005483  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:55.186176  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:55.462706  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:55.504851  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:55.686635  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:55.962960  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:56.005017  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:56.186488  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:56.464821  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:56.505075  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:56.684892  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:56.962694  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:57.004831  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:57.185719  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:57.464857  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:57.504391  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:57.685960  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:57.963361  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:58.004880  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:58.186839  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:58.813120  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:58.813259  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:58.813429  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:58.963265  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:59.069508  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:59.185794  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:59.468618  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:37:59.519849  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:37:59.687100  669884 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 21:37:59.962796  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:00.004761  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:00.186531  669884 kapi.go:107] duration metric: took 1m14.005390978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 21:38:00.462339  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:00.506493  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:00.962631  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:01.004816  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 21:38:01.466333  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:01.504204  669884 kapi.go:107] duration metric: took 1m13.504297233s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 21:38:01.963877  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:02.466828  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:02.962581  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:03.463069  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:03.962824  669884 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 21:38:04.463167  669884 kapi.go:107] duration metric: took 1m15.004246383s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 21:38:04.465316  669884 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-413632 cluster.
	I1025 21:38:04.466837  669884 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 21:38:04.468223  669884 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 21:38:04.469684  669884 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, storage-provisioner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1025 21:38:04.471028  669884 addons.go:510] duration metric: took 1m26.852485407s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner ingress-dns default-storageclass inspektor-gadget metrics-server storage-provisioner yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1025 21:38:04.471073  669884 start.go:246] waiting for cluster config update ...
	I1025 21:38:04.471096  669884 start.go:255] writing updated cluster config ...
	I1025 21:38:04.471380  669884 ssh_runner.go:195] Run: rm -f paused
	I1025 21:38:04.523185  669884 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 21:38:04.524936  669884 out.go:177] * Done! kubectl is now configured to use "addons-413632" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.745875443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892671745853200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2819ad8-cc3e-4061-89bf-5e76081a22e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.746291574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fd9a194-368c-430d-ab96-04d300dece0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.746341771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fd9a194-368c-430d-ab96-04d300dece0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.746697080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f9cd8b43e6a408f6e67b8e30231a72e77348d2ee856c7746d699eaa7ebe9e02,PodSandboxId:356c006482af48de267031fe0f42781fe15be81b2805a57bf75883897863b507,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729892478883178178,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-n7dj7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 375477b2-b00d-4105-be11-b2caab094c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b
426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022
89416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd
1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fd9a194-368c-430d-ab96-04d300dece0d name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.784174465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=407733b8-8f99-4402-b4ae-1c8f9074cda0 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.784265961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=407733b8-8f99-4402-b4ae-1c8f9074cda0 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.785537313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ecd922d-5a1e-4a93-90d0-99e0b5041f89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.786871124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892671786842549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ecd922d-5a1e-4a93-90d0-99e0b5041f89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.787520271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbf879be-101a-48f7-927c-091a5e0da531 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.787630601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbf879be-101a-48f7-927c-091a5e0da531 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.788673789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f9cd8b43e6a408f6e67b8e30231a72e77348d2ee856c7746d699eaa7ebe9e02,PodSandboxId:356c006482af48de267031fe0f42781fe15be81b2805a57bf75883897863b507,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729892478883178178,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-n7dj7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 375477b2-b00d-4105-be11-b2caab094c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b
426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022
89416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd
1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbf879be-101a-48f7-927c-091a5e0da531 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.831615382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb2a12e5-dac3-4fe4-8f29-af4661b92afa name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.831706640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb2a12e5-dac3-4fe4-8f29-af4661b92afa name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.832826681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1deeb9e9-bb04-4cb6-a2df-6b5e0eca9b49 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.834526567Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892671834497831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1deeb9e9-bb04-4cb6-a2df-6b5e0eca9b49 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.835144429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2f6a242-00d4-4529-bc64-e61c7da814a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.835197411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2f6a242-00d4-4529-bc64-e61c7da814a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.835784919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f9cd8b43e6a408f6e67b8e30231a72e77348d2ee856c7746d699eaa7ebe9e02,PodSandboxId:356c006482af48de267031fe0f42781fe15be81b2805a57bf75883897863b507,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729892478883178178,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-n7dj7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 375477b2-b00d-4105-be11-b2caab094c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b
426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022
89416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd
1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2f6a242-00d4-4529-bc64-e61c7da814a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.867801878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3e464d4-6589-4892-a22d-b7c32d704794 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.867894887Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3e464d4-6589-4892-a22d-b7c32d704794 name=/runtime.v1.RuntimeService/Version
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.868909202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43084e90-ed86-4e07-acc8-c582359b1c3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.870269046Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892671870242515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43084e90-ed86-4e07-acc8-c582359b1c3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.870860516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8080844-062b-4db2-bd0c-ceec030b4365 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.870930008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8080844-062b-4db2-bd0c-ceec030b4365 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 21:44:31 addons-413632 crio[661]: time="2024-10-25 21:44:31.871183650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5f9cd8b43e6a408f6e67b8e30231a72e77348d2ee856c7746d699eaa7ebe9e02,PodSandboxId:356c006482af48de267031fe0f42781fe15be81b2805a57bf75883897863b507,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1729892478883178178,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-n7dj7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 375477b2-b00d-4105-be11-b2caab094c85,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:977d398946a4ad83b12143788ef1a6022cd040aa4968cc7dd6642b0f22e6772f,PodSandboxId:9fa681ce0b8a9b34ae6515e73eece8d706c1d65b857230f1ff8347c919bdd8f9,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1729892337105974621,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6f49bba-8fb4-4037-8ccc-f07fcab0a94d,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea98b83c409c23d1000bf585061118a404b4300c4c4a99241371020d65bce89a,PodSandboxId:85115ae19e3d07bf0a5c5dfe9ffd41781512b3eb6d999b46d155d4365054d069,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1729892290509733401,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a5d80ed-a009-46e7-b
426-d6655a8413e2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baec8292992674b49697d6af126040a9da68ef44df2783c1c4b7861157cf32ad,PodSandboxId:bc1b7713e38da856981f71dc33cb552cd5cbaaa712d91a361e881c48c45808df,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1729892238523121117,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-7drm7,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9dd37623-d67c-48a2-8e11-18a05cd71be2,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6238e6338b162c61e6e2a8be58e78ed840f82fafa4d3c248d31cd6bc72b89029,PodSandboxId:c8ec7d7eeb3613a3013e2cd1fa0dfe5433800a385e662f5bfd034d742d26f241,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1729892236492648812,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-967pw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdd329aa-b9f0-4233-b2ab-db63265d7d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43,PodSandboxId:d15dbd0de78cb98a395bc77504bf63eb961a10dcc2ac61f8dbd34f5440e02a6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729892205309269277,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f755426f-779c-44a0-9058-958be3222114,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a,PodSandboxId:970d9b785a131bc99ba20c817488b1cbe94db7c280dae9ff9f815efcfd34edf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729892201198829203,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9tqzw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88e7f6a7-96fd-4c16-b0df-4feb71acbfe4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da,PodSandboxId:029a9b9d3282dc2c554858f577819c81515e51f522f39b19c1141a775fcead1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729892198730490551,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jg272,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3a14441-9149-4a18-b5d6-06302835d38b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3,PodSandboxId:db17ada6357554fdef17312bf4e018fe3c8641502b50f47b8148eb0f0ac7f973,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1729892187158359640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2cf953e687acfe867d6014b05d5d522,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57,PodSandboxId:8f5d32b33d448bc2246a8a59dc93ddc90a60fbdfbeedb2bf901ef7429a002a39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea630022
89416a13b,State:CONTAINER_RUNNING,CreatedAt:1729892187171984825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafba3f19dbe195d5c700a798f157236,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60,PodSandboxId:6a08fd265dc1ef5a831e28b3826b2dbd585a1180ac5f60e131e381f52290f29b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd
1,State:CONTAINER_RUNNING,CreatedAt:1729892187154286284,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49439bc3fb05a292c1410f73bc723b0d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b,PodSandboxId:533122eec836066754f96a6cc9443c062696eeefe6e502b0ff2f6461ed360e56,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1a
ee,State:CONTAINER_RUNNING,CreatedAt:1729892187146421178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-413632,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1851d47b017f83b9130a6443875b8957,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8080844-062b-4db2-bd0c-ceec030b4365 name=/runtime.v1.RuntimeService/ListContainers
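The Request/Response pairs above are ordinary CRI polling traffic: a client on the node calls /runtime.v1.RuntimeService/ListContainers against the CRI-O socket every few tens of milliseconds, and crio echoes the full response at debug level each time. Below is a minimal sketch of the same call through the CRI gRPC API; it assumes the default crio socket path and the k8s.io/cri-api client, and is illustrative rather than the code the log collector actually runs.

    // list_containers.go: a sketch of the ListContainers call that produces
    // the debug log lines above. Assumes the default CRI-O socket path.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter corresponds to the "No filters were applied,
        // returning full container list" debug message in the crio log.
        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State.String())
        }
    }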
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5f9cd8b43e6a4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   356c006482af4       hello-world-app-55bf9c44b4-n7dj7
	977d398946a4a       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   9fa681ce0b8a9       nginx
	ea98b83c409c2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   85115ae19e3d0       busybox
	baec829299267       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   bc1b7713e38da       metrics-server-84c5f94fbc-7drm7
	6238e6338b162       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   c8ec7d7eeb361       amd-gpu-device-plugin-967pw
	2337a9243bcac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   d15dbd0de78cb       storage-provisioner
	5de2757df2702       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   970d9b785a131       coredns-7c65d6cfc9-9tqzw
	641998da4d5c9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        7 minutes ago       Running             kube-proxy                0                   029a9b9d3282d       kube-proxy-jg272
	db634e56fb345       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        8 minutes ago       Running             kube-scheduler            0                   8f5d32b33d448       kube-scheduler-addons-413632
	7255e811190fe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   db17ada635755       etcd-addons-413632
	3fbd91729a37e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        8 minutes ago       Running             kube-controller-manager   0                   6a08fd265dc1e       kube-controller-manager-addons-413632
	ae973967b4de2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        8 minutes ago       Running             kube-apiserver            0                   533122eec8360       kube-apiserver-addons-413632
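The CREATED column in this table is derived from the CreatedAt nanosecond timestamps in the ListContainers responses above; for example, hello-world-app at 1729892478883178178 ns against the 21:44:31 collection time works out to roughly three minutes. A small, purely illustrative conversion:

    // created_ago.go: illustrative conversion of a CRI CreatedAt value
    // (unix nanoseconds) into the "N minutes ago" form shown above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        createdAt := int64(1729892478883178178)                   // hello-world-app, from the log above
        now := time.Date(2024, 10, 25, 21, 44, 31, 0, time.UTC)   // log collection time

        age := now.Sub(time.Unix(0, createdAt))
        fmt.Printf("hello-world-app created %d minutes ago\n", int(age.Minutes())) // -> 3
    }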
	
	
	==> coredns [5de2757df270253d3b8f7aad1eb5918450aad5bbec5cdbae6101327a566b0c3a] <==
	[INFO] 10.244.0.22:49100 - 37720 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097395s
	[INFO] 10.244.0.22:49100 - 6528 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072093s
	[INFO] 10.244.0.22:49100 - 35595 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069809s
	[INFO] 10.244.0.22:49100 - 51959 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091095s
	[INFO] 10.244.0.22:59921 - 50010 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093863s
	[INFO] 10.244.0.22:59921 - 8990 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086575s
	[INFO] 10.244.0.22:59921 - 32133 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070356s
	[INFO] 10.244.0.22:59921 - 18175 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062131s
	[INFO] 10.244.0.22:59921 - 37532 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005212s
	[INFO] 10.244.0.22:59921 - 53034 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043663s
	[INFO] 10.244.0.22:59921 - 45425 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060769s
	[INFO] 10.244.0.22:49372 - 60701 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090403s
	[INFO] 10.244.0.22:35379 - 14937 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069549s
	[INFO] 10.244.0.22:49372 - 38777 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061578s
	[INFO] 10.244.0.22:49372 - 35199 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065467s
	[INFO] 10.244.0.22:35379 - 62834 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054012s
	[INFO] 10.244.0.22:49372 - 55955 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037321s
	[INFO] 10.244.0.22:49372 - 3045 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038654s
	[INFO] 10.244.0.22:35379 - 11365 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000026588s
	[INFO] 10.244.0.22:35379 - 48768 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031431s
	[INFO] 10.244.0.22:49372 - 17488 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059099s
	[INFO] 10.244.0.22:35379 - 4777 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138344s
	[INFO] 10.244.0.22:49372 - 49111 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096924s
	[INFO] 10.244.0.22:35379 - 64785 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000116758s
	[INFO] 10.244.0.22:35379 - 47703 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060039s
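The NXDOMAIN/NOERROR pattern above is the querying pod's resolver walking its DNS search list: with the usual ndots:5 setting, the relative name hello-world-app.default.svc.cluster.local (four dots) is first tried with every search suffix visible in the queries (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before the bare name resolves with NOERROR. A tiny sketch of that expansion order, assuming the standard kube-dns search domains for a pod in the ingress-nginx namespace:

    // search_expansion.go: illustrative reproduction of the query order
    // seen in the coredns log above (ndots-style search list expansion).
    package main

    import "fmt"

    func main() {
        name := "hello-world-app.default.svc.cluster.local"
        // Assumed search list for a pod in the ingress-nginx namespace.
        search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}

        // The name has 4 dots, fewer than ndots:5, so each suffix is tried
        // (and answered NXDOMAIN) before the bare name is queried.
        for _, s := range search {
            fmt.Println("query:", name+"."+s) // -> NXDOMAIN in the log
        }
        fmt.Println("query:", name) // -> NOERROR in the log
    }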
	
	
	==> describe nodes <==
	Name:               addons-413632
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-413632
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc
	                    minikube.k8s.io/name=addons-413632
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T21_36_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-413632
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Oct 2024 21:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-413632
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Oct 2024 21:44:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Oct 2024 21:41:39 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Oct 2024 21:41:39 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Oct 2024 21:41:39 +0000   Fri, 25 Oct 2024 21:36:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 25 Oct 2024 21:41:39 +0000   Fri, 25 Oct 2024 21:36:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.223
	  Hostname:    addons-413632
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a839ce67ffa94184a398d8242d28429c
	  System UUID:                a839ce67-ffa9-4184-a398-d8242d28429c
	  Boot ID:                    482464d4-1bf2-4223-a91e-3e78b95a75f5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     hello-world-app-55bf9c44b4-n7dj7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kube-system                 amd-gpu-device-plugin-967pw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 coredns-7c65d6cfc9-9tqzw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m55s
	  kube-system                 etcd-addons-413632                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m
	  kube-system                 kube-apiserver-addons-413632             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-controller-manager-addons-413632    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-proxy-jg272                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-addons-413632             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 metrics-server-84c5f94fbc-7drm7          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m52s  kube-proxy       
	  Normal  Starting                 8m     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m     kubelet          Node addons-413632 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m     kubelet          Node addons-413632 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m     kubelet          Node addons-413632 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m59s  kubelet          Node addons-413632 status is now: NodeReady
	  Normal  RegisteredNode           7m56s  node-controller  Node addons-413632 event: Registered Node addons-413632 in Controller
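The percentages in the "Allocated resources" table are just the summed pod requests and limits divided by the node's Allocatable values: 850m of CPU against 2 cores is 42%, and 370Mi of memory requests against 3912780Ki is roughly 9%. A quick arithmetic sketch:

    // allocation_percent.go: reproduces the request/limit percentages from
    // the "Allocated resources" table above.
    package main

    import "fmt"

    func main() {
        allocatableCPUMilli := int64(2 * 1000) // 2 CPUs
        allocatableMemKi := int64(3912780)     // from Allocatable above

        cpuRequestsMilli := int64(850)   // coredns 100m + etcd 100m + apiserver 250m + controller-manager 200m + scheduler 100m + metrics-server 100m
        memRequestsKi := int64(370 * 1024) // coredns 70Mi + etcd 100Mi + metrics-server 200Mi
        memLimitsKi := int64(170 * 1024)   // coredns 170Mi limit

        fmt.Printf("cpu requests:    %d%%\n", cpuRequestsMilli*100/allocatableCPUMilli) // 42%
        fmt.Printf("memory requests: %d%%\n", memRequestsKi*100/allocatableMemKi)       // 9%
        fmt.Printf("memory limits:   %d%%\n", memLimitsKi*100/allocatableMemKi)         // 4%
    }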
	
	
	==> dmesg <==
	[  +5.334409] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.111192] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.135771] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.335463] kauditd_printk_skb: 151 callbacks suppressed
	[  +8.162051] kauditd_printk_skb: 66 callbacks suppressed
	[Oct25 21:37] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.283563] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.547271] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.723041] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.064607] kauditd_printk_skb: 36 callbacks suppressed
	[  +6.776111] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.164873] kauditd_printk_skb: 2 callbacks suppressed
	[Oct25 21:38] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.256100] kauditd_printk_skb: 4 callbacks suppressed
	[ +20.869470] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.353513] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.940086] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.105746] kauditd_printk_skb: 51 callbacks suppressed
	[  +5.042848] kauditd_printk_skb: 25 callbacks suppressed
	[Oct25 21:39] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.589151] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.918248] kauditd_printk_skb: 2 callbacks suppressed
	[Oct25 21:40] kauditd_printk_skb: 7 callbacks suppressed
	[Oct25 21:41] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.254683] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [7255e811190fe10b3f68d0d81ca74a6ac1eb905b3bc947bc5a6d78eb5d3b5ee3] <==
	{"level":"info","ts":"2024-10-25T21:37:43.932885Z","caller":"traceutil/trace.go:171","msg":"trace[1390579874] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"259.764467ms","start":"2024-10-25T21:37:43.673112Z","end":"2024-10-25T21:37:43.932876Z","steps":["trace[1390579874] 'agreement among raft nodes before linearized reading'  (duration: 259.668882ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:37:58.794096Z","caller":"traceutil/trace.go:171","msg":"trace[1625734794] linearizableReadLoop","detail":"{readStateIndex:1124; appliedIndex:1123; }","duration":"343.452974ms","start":"2024-10-25T21:37:58.450627Z","end":"2024-10-25T21:37:58.794080Z","steps":["trace[1625734794] 'read index received'  (duration: 343.278344ms)","trace[1625734794] 'applied index is now lower than readState.Index'  (duration: 173.781µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-25T21:37:58.794364Z","caller":"traceutil/trace.go:171","msg":"trace[1207978474] transaction","detail":"{read_only:false; response_revision:1090; number_of_response:1; }","duration":"439.500973ms","start":"2024-10-25T21:37:58.354854Z","end":"2024-10-25T21:37:58.794355Z","steps":["trace[1207978474] 'process raft request'  (duration: 439.072971ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"302.42489ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794501Z","caller":"traceutil/trace.go:171","msg":"trace[788931765] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"302.505472ms","start":"2024-10-25T21:37:58.491987Z","end":"2024-10-25T21:37:58.794493Z","steps":["trace[788931765] 'agreement among raft nodes before linearized reading'  (duration: 302.397336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.491956Z","time spent":"302.563571ms","remote":"127.0.0.1:51868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-25T21:37:58.794702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.387422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794743Z","caller":"traceutil/trace.go:171","msg":"trace[867772007] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"121.429317ms","start":"2024-10-25T21:37:58.673307Z","end":"2024-10-25T21:37:58.794737Z","steps":["trace[867772007] 'agreement among raft nodes before linearized reading'  (duration: 121.34698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"265.284061ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.794867Z","caller":"traceutil/trace.go:171","msg":"trace[386388254] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1090; }","duration":"265.377125ms","start":"2024-10-25T21:37:58.529483Z","end":"2024-10-25T21:37:58.794860Z","steps":["trace[386388254] 'agreement among raft nodes before linearized reading'  (duration: 265.267993ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.794477Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.354840Z","time spent":"439.571001ms","remote":"127.0.0.1:51846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1089 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-25T21:37:58.794998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"344.365895ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:37:58.795254Z","caller":"traceutil/trace.go:171","msg":"trace[1795563616] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"344.622333ms","start":"2024-10-25T21:37:58.450623Z","end":"2024-10-25T21:37:58.795245Z","steps":["trace[1795563616] 'agreement among raft nodes before linearized reading'  (duration: 344.347378ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:37:58.795303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:37:58.450538Z","time spent":"344.756037ms","remote":"127.0.0.1:51868","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-25T21:38:08.978112Z","caller":"traceutil/trace.go:171","msg":"trace[1296786523] transaction","detail":"{read_only:false; response_revision:1149; number_of_response:1; }","duration":"130.212149ms","start":"2024-10-25T21:38:08.847880Z","end":"2024-10-25T21:38:08.978092Z","steps":["trace[1296786523] 'process raft request'  (duration: 130.097075ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:38:34.215899Z","caller":"traceutil/trace.go:171","msg":"trace[567018480] transaction","detail":"{read_only:false; response_revision:1297; number_of_response:1; }","duration":"173.774915ms","start":"2024-10-25T21:38:34.042109Z","end":"2024-10-25T21:38:34.215884Z","steps":["trace[567018480] 'process raft request'  (duration: 173.522387ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T21:38:37.374066Z","caller":"traceutil/trace.go:171","msg":"trace[2028133464] linearizableReadLoop","detail":"{readStateIndex:1346; appliedIndex:1345; }","duration":"220.868782ms","start":"2024-10-25T21:38:37.153182Z","end":"2024-10-25T21:38:37.374050Z","steps":["trace[2028133464] 'read index received'  (duration: 220.732368ms)","trace[2028133464] 'applied index is now lower than readState.Index'  (duration: 135.997µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-25T21:38:37.374398Z","caller":"traceutil/trace.go:171","msg":"trace[1921315016] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1302; }","duration":"308.973694ms","start":"2024-10-25T21:38:37.065414Z","end":"2024-10-25T21:38:37.374388Z","steps":["trace[1921315016] 'process raft request'  (duration: 308.53717ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.374657Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-25T21:38:37.065401Z","time spent":"309.058057ms","remote":"127.0.0.1:52116","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:855 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
	{"level":"warn","ts":"2024-10-25T21:38:37.374885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.716854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-10-25T21:38:37.374939Z","caller":"traceutil/trace.go:171","msg":"trace[1144617021] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1302; }","duration":"221.769929ms","start":"2024-10-25T21:38:37.153160Z","end":"2024-10-25T21:38:37.374929Z","steps":["trace[1144617021] 'agreement among raft nodes before linearized reading'  (duration: 221.654793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.375158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.440807ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:38:37.375199Z","caller":"traceutil/trace.go:171","msg":"trace[2051312921] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1302; }","duration":"139.483807ms","start":"2024-10-25T21:38:37.235709Z","end":"2024-10-25T21:38:37.375192Z","steps":["trace[2051312921] 'agreement among raft nodes before linearized reading'  (duration: 139.431263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T21:38:37.375646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.378347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T21:38:37.375742Z","caller":"traceutil/trace.go:171","msg":"trace[657013752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"120.457537ms","start":"2024-10-25T21:38:37.255258Z","end":"2024-10-25T21:38:37.375716Z","steps":["trace[657013752] 'agreement among raft nodes before linearized reading'  (duration: 120.366026ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:44:32 up 8 min,  0 users,  load average: 0.21, 0.65, 0.47
	Linux addons-413632 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae973967b4de24f3c14c03b5a09fa86d39e00a26a742bbc030c4d75534bdc49b] <==
	 > logger="UnhandledError"
	E1025 21:38:29.245797       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.151.10:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.151.10:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 21:38:29.280480       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 21:38:29.291769       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1025 21:38:31.176471       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.124.101"}
	I1025 21:38:54.327901       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 21:38:54.507502       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.50.55"}
	I1025 21:38:56.751256       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1025 21:38:57.779658       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1025 21:39:06.208970       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 21:39:40.046774       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 21:40:10.134409       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.134493       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.172449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.172512       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.204091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.204154       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.207473       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.207525       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 21:40:10.226480       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 21:40:10.226534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 21:40:11.209336       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 21:40:11.226861       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 21:40:11.341033       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 21:41:16.031017       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.10.223"}
	
	
	==> kube-controller-manager [3fbd91729a37e239201bca24a5a4ff31b4df50fe66b81f0dd91458a5e8c7de60] <==
	E1025 21:42:01.859491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:42:14.488375       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:42:14.488473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:42:47.874382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:42:47.874538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:42:49.733968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:42:49.734589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:42:51.465008       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:42:51.465106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:43:06.450623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:43:06.450854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:43:19.977283       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:43:19.977527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:43:40.771988       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:43:40.772180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:43:43.157598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:43:43.157657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:43:46.303532       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:43:46.303725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:44:09.537908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:44:09.538088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:44:16.668512       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:44:16.668681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1025 21:44:26.608186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1025 21:44:26.608297       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [641998da4d5c9fd94ab832ac1848bd8582e04c547295850e6299c928c5b5a2da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 21:36:39.590928       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 21:36:39.650217       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.223"]
	E1025 21:36:39.650402       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 21:36:39.970750       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1025 21:36:39.970779       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 21:36:39.970801       1 server_linux.go:169] "Using iptables Proxier"
	I1025 21:36:40.039085       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 21:36:40.048206       1 server.go:483] "Version info" version="v1.31.1"
	I1025 21:36:40.048226       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 21:36:40.056443       1 config.go:199] "Starting service config controller"
	I1025 21:36:40.082764       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 21:36:40.082884       1 config.go:105] "Starting endpoint slice config controller"
	I1025 21:36:40.082892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 21:36:40.084062       1 config.go:328] "Starting node config controller"
	I1025 21:36:40.084072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 21:36:40.184065       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 21:36:40.184103       1 shared_informer.go:320] Caches are synced for service config
	I1025 21:36:40.184286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [db634e56fb345d06a2d9229df87f9dc4620c31d59ebe1a6de2f2f9a05d251f57] <==
	E1025 21:36:30.012135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.011930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 21:36:30.012154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.011989       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.012189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1025 21:36:30.010343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.012775       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 21:36:30.012813       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.833504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.833596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.899934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 21:36:30.900868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.923996       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.924090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.962092       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 21:36:30.962263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:30.997750       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 21:36:30.997803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.053119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 21:36:31.053179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.077646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 21:36:31.077681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1025 21:36:31.123767       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 21:36:31.123861       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1025 21:36:33.395251       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 21:43:03 addons-413632 kubelet[1209]: E1025 21:43:03.129669    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892583129173612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:13 addons-413632 kubelet[1209]: E1025 21:43:13.134941    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892593134253570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:13 addons-413632 kubelet[1209]: E1025 21:43:13.134984    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892593134253570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:20 addons-413632 kubelet[1209]: I1025 21:43:20.599065    1209 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-967pw" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 21:43:23 addons-413632 kubelet[1209]: E1025 21:43:23.138012    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892603137667689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:23 addons-413632 kubelet[1209]: E1025 21:43:23.138061    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892603137667689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:32 addons-413632 kubelet[1209]: E1025 21:43:32.620058    1209 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 25 21:43:32 addons-413632 kubelet[1209]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 25 21:43:32 addons-413632 kubelet[1209]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 25 21:43:32 addons-413632 kubelet[1209]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 25 21:43:32 addons-413632 kubelet[1209]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 25 21:43:33 addons-413632 kubelet[1209]: E1025 21:43:33.141032    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892613140057957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:33 addons-413632 kubelet[1209]: E1025 21:43:33.141057    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892613140057957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:43 addons-413632 kubelet[1209]: E1025 21:43:43.143837    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892623143245290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:43 addons-413632 kubelet[1209]: E1025 21:43:43.144169    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892623143245290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:53 addons-413632 kubelet[1209]: E1025 21:43:53.147789    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892633147095685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:43:53 addons-413632 kubelet[1209]: E1025 21:43:53.148077    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892633147095685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:03 addons-413632 kubelet[1209]: E1025 21:44:03.151222    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892643150761789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:03 addons-413632 kubelet[1209]: E1025 21:44:03.151267    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892643150761789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:13 addons-413632 kubelet[1209]: E1025 21:44:13.156653    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892653155731419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:13 addons-413632 kubelet[1209]: E1025 21:44:13.156690    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892653155731419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:21 addons-413632 kubelet[1209]: I1025 21:44:21.599530    1209 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 21:44:23 addons-413632 kubelet[1209]: E1025 21:44:23.159299    1209 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892663158874782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:23 addons-413632 kubelet[1209]: E1025 21:44:23.159634    1209 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729892663158874782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596169,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 25 21:44:29 addons-413632 kubelet[1209]: I1025 21:44:29.598662    1209 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-967pw" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [2337a9243bcac04881850da120ebdfa6c2851c259599356474081ac3b2158e43] <==
	I1025 21:36:46.264394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 21:36:46.307767       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 21:36:46.307964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 21:36:46.329987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 21:36:46.330159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134!
	I1025 21:36:46.333321       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cafad7de-61e1-438f-87d5-43ad3584c8ce", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134 became leader
	I1025 21:36:46.434679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-413632_cc3b9645-f160-4b7d-8628-c6fbb3b1c134!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-413632 -n addons-413632
helpers_test.go:261: (dbg) Run:  kubectl --context addons-413632 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (363.20s)

                                                
                                    
TestPreload (171.84s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1025 22:33:06.946087  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m27.188875599s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416177 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-416177 image pull gcr.io/k8s-minikube/busybox: (3.470735386s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-416177
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-416177: (7.292641401s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-416177 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1025 22:34:40.902322  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-416177 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.833883729s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416177 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-25 22:35:08.055444822 +0000 UTC m=+3598.176604332
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-416177 -n test-preload-416177
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-416177 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-416177 logs -n 25: (1.051078437s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-511849 ssh -n                                                                 | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | multinode-511849-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-511849 ssh -n multinode-511849 sudo cat                                       | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | /home/docker/cp-test_multinode-511849-m03_multinode-511849.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-511849 cp multinode-511849-m03:/home/docker/cp-test.txt                       | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | multinode-511849-m02:/home/docker/cp-test_multinode-511849-m03_multinode-511849-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-511849 ssh -n                                                                 | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | multinode-511849-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-511849 ssh -n multinode-511849-m02 sudo cat                                   | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | /home/docker/cp-test_multinode-511849-m03_multinode-511849-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-511849 node stop m03                                                          | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	| node    | multinode-511849 node start                                                             | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-511849                                                                | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC |                     |
	| stop    | -p multinode-511849                                                                     | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:20 UTC | 25 Oct 24 22:23 UTC |
	| start   | -p multinode-511849                                                                     | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:23 UTC | 25 Oct 24 22:26 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-511849                                                                | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:26 UTC |                     |
	| node    | multinode-511849 node delete                                                            | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:26 UTC | 25 Oct 24 22:26 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-511849 stop                                                                   | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:26 UTC | 25 Oct 24 22:29 UTC |
	| start   | -p multinode-511849                                                                     | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:29 UTC | 25 Oct 24 22:31 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-511849                                                                | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:31 UTC |                     |
	| start   | -p multinode-511849-m02                                                                 | multinode-511849-m02 | jenkins | v1.34.0 | 25 Oct 24 22:31 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-511849-m03                                                                 | multinode-511849-m03 | jenkins | v1.34.0 | 25 Oct 24 22:31 UTC | 25 Oct 24 22:32 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-511849                                                                 | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:32 UTC |                     |
	| delete  | -p multinode-511849-m03                                                                 | multinode-511849-m03 | jenkins | v1.34.0 | 25 Oct 24 22:32 UTC | 25 Oct 24 22:32 UTC |
	| delete  | -p multinode-511849                                                                     | multinode-511849     | jenkins | v1.34.0 | 25 Oct 24 22:32 UTC | 25 Oct 24 22:32 UTC |
	| start   | -p test-preload-416177                                                                  | test-preload-416177  | jenkins | v1.34.0 | 25 Oct 24 22:32 UTC | 25 Oct 24 22:33 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-416177 image pull                                                          | test-preload-416177  | jenkins | v1.34.0 | 25 Oct 24 22:33 UTC | 25 Oct 24 22:33 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-416177                                                                  | test-preload-416177  | jenkins | v1.34.0 | 25 Oct 24 22:33 UTC | 25 Oct 24 22:33 UTC |
	| start   | -p test-preload-416177                                                                  | test-preload-416177  | jenkins | v1.34.0 | 25 Oct 24 22:33 UTC | 25 Oct 24 22:35 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-416177 image list                                                          | test-preload-416177  | jenkins | v1.34.0 | 25 Oct 24 22:35 UTC | 25 Oct 24 22:35 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 22:33:57
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:33:57.050353  701163 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:33:57.050468  701163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:33:57.050480  701163 out.go:358] Setting ErrFile to fd 2...
	I1025 22:33:57.050485  701163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:33:57.050701  701163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:33:57.051315  701163 out.go:352] Setting JSON to false
	I1025 22:33:57.052361  701163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":18981,"bootTime":1729876656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:33:57.052458  701163 start.go:139] virtualization: kvm guest
	I1025 22:33:57.054948  701163 out.go:177] * [test-preload-416177] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:33:57.056320  701163 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:33:57.056325  701163 notify.go:220] Checking for updates...
	I1025 22:33:57.058909  701163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:33:57.060215  701163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:33:57.061612  701163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:33:57.063099  701163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:33:57.064490  701163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:33:57.066311  701163 config.go:182] Loaded profile config "test-preload-416177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1025 22:33:57.067015  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:33:57.067079  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:33:57.082735  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1025 22:33:57.083177  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:33:57.083698  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:33:57.083719  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:33:57.084069  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:33:57.084267  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:33:57.086050  701163 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 22:33:57.087439  701163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:33:57.087736  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:33:57.087779  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:33:57.102467  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I1025 22:33:57.102926  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:33:57.103432  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:33:57.103454  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:33:57.103724  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:33:57.103908  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:33:57.137668  701163 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:33:57.138974  701163 start.go:297] selected driver: kvm2
	I1025 22:33:57.138986  701163 start.go:901] validating driver "kvm2" against &{Name:test-preload-416177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-416177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:33:57.139100  701163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:33:57.140028  701163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:33:57.140112  701163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:33:57.154714  701163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:33:57.155048  701163 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:33:57.155080  701163 cni.go:84] Creating CNI manager for ""
	I1025 22:33:57.155128  701163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:33:57.155177  701163 start.go:340] cluster config:
	{Name:test-preload-416177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-416177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:33:57.155282  701163 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:33:57.156920  701163 out.go:177] * Starting "test-preload-416177" primary control-plane node in "test-preload-416177" cluster
	I1025 22:33:57.158137  701163 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1025 22:33:57.286463  701163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1025 22:33:57.286500  701163 cache.go:56] Caching tarball of preloaded images
	I1025 22:33:57.286694  701163 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1025 22:33:57.288419  701163 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1025 22:33:57.289686  701163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1025 22:33:57.888680  701163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1025 22:34:10.363997  701163 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1025 22:34:10.364098  701163 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1025 22:34:11.226713  701163 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1025 22:34:11.226861  701163 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/config.json ...
	I1025 22:34:11.227097  701163 start.go:360] acquireMachinesLock for test-preload-416177: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:34:11.227166  701163 start.go:364] duration metric: took 46.793µs to acquireMachinesLock for "test-preload-416177"
	I1025 22:34:11.227182  701163 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:34:11.227190  701163 fix.go:54] fixHost starting: 
	I1025 22:34:11.227482  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:11.227531  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:11.242412  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1025 22:34:11.242909  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:11.243342  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:11.243374  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:11.243744  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:11.243927  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:11.244062  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetState
	I1025 22:34:11.245555  701163 fix.go:112] recreateIfNeeded on test-preload-416177: state=Stopped err=<nil>
	I1025 22:34:11.245579  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	W1025 22:34:11.245724  701163 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 22:34:11.247909  701163 out.go:177] * Restarting existing kvm2 VM for "test-preload-416177" ...
	I1025 22:34:11.249117  701163 main.go:141] libmachine: (test-preload-416177) Calling .Start
	I1025 22:34:11.249272  701163 main.go:141] libmachine: (test-preload-416177) starting domain...
	I1025 22:34:11.249291  701163 main.go:141] libmachine: (test-preload-416177) ensuring networks are active...
	I1025 22:34:11.250029  701163 main.go:141] libmachine: (test-preload-416177) Ensuring network default is active
	I1025 22:34:11.250448  701163 main.go:141] libmachine: (test-preload-416177) Ensuring network mk-test-preload-416177 is active
	I1025 22:34:11.250795  701163 main.go:141] libmachine: (test-preload-416177) getting domain XML...
	I1025 22:34:11.251399  701163 main.go:141] libmachine: (test-preload-416177) creating domain...
	I1025 22:34:12.443609  701163 main.go:141] libmachine: (test-preload-416177) waiting for IP...
	I1025 22:34:12.444499  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:12.444867  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:12.445009  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:12.444860  701248 retry.go:31] will retry after 228.183177ms: waiting for domain to come up
	I1025 22:34:12.674343  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:12.674823  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:12.674857  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:12.674779  701248 retry.go:31] will retry after 362.066408ms: waiting for domain to come up
	I1025 22:34:13.038127  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:13.038541  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:13.038566  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:13.038511  701248 retry.go:31] will retry after 446.649968ms: waiting for domain to come up
	I1025 22:34:13.487340  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:13.487728  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:13.487778  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:13.487718  701248 retry.go:31] will retry after 452.72328ms: waiting for domain to come up
	I1025 22:34:13.942333  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:13.942760  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:13.942784  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:13.942716  701248 retry.go:31] will retry after 705.18044ms: waiting for domain to come up
	I1025 22:34:14.649301  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:14.649655  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:14.649680  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:14.649626  701248 retry.go:31] will retry after 788.924129ms: waiting for domain to come up
	I1025 22:34:15.440855  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:15.441257  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:15.441308  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:15.441217  701248 retry.go:31] will retry after 1.124422914s: waiting for domain to come up
	I1025 22:34:16.566897  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:16.567292  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:16.567322  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:16.567254  701248 retry.go:31] will retry after 1.31352447s: waiting for domain to come up
	I1025 22:34:17.882960  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:17.883316  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:17.883370  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:17.883307  701248 retry.go:31] will retry after 1.681290591s: waiting for domain to come up
	I1025 22:34:19.567199  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:19.567638  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:19.567683  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:19.567602  701248 retry.go:31] will retry after 1.888063977s: waiting for domain to come up
	I1025 22:34:21.457232  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:21.457729  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:21.457761  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:21.457695  701248 retry.go:31] will retry after 1.758103582s: waiting for domain to come up
	I1025 22:34:23.218638  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:23.219049  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:23.219077  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:23.219020  701248 retry.go:31] will retry after 3.537421985s: waiting for domain to come up
	I1025 22:34:26.757574  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:26.758096  701163 main.go:141] libmachine: (test-preload-416177) DBG | unable to find current IP address of domain test-preload-416177 in network mk-test-preload-416177
	I1025 22:34:26.758117  701163 main.go:141] libmachine: (test-preload-416177) DBG | I1025 22:34:26.758043  701248 retry.go:31] will retry after 2.83141101s: waiting for domain to come up
	I1025 22:34:29.593195  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.593654  701163 main.go:141] libmachine: (test-preload-416177) found domain IP: 192.168.39.136
	I1025 22:34:29.593681  701163 main.go:141] libmachine: (test-preload-416177) reserving static IP address...
	I1025 22:34:29.593702  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has current primary IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.594108  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "test-preload-416177", mac: "52:54:00:42:49:9b", ip: "192.168.39.136"} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.594153  701163 main.go:141] libmachine: (test-preload-416177) DBG | skip adding static IP to network mk-test-preload-416177 - found existing host DHCP lease matching {name: "test-preload-416177", mac: "52:54:00:42:49:9b", ip: "192.168.39.136"}
	I1025 22:34:29.594168  701163 main.go:141] libmachine: (test-preload-416177) reserved static IP address 192.168.39.136 for domain test-preload-416177
	I1025 22:34:29.594191  701163 main.go:141] libmachine: (test-preload-416177) waiting for SSH...
	I1025 22:34:29.594204  701163 main.go:141] libmachine: (test-preload-416177) DBG | Getting to WaitForSSH function...
	I1025 22:34:29.596212  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.596552  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.596583  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.596714  701163 main.go:141] libmachine: (test-preload-416177) DBG | Using SSH client type: external
	I1025 22:34:29.596743  701163 main.go:141] libmachine: (test-preload-416177) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa (-rw-------)
	I1025 22:34:29.596779  701163 main.go:141] libmachine: (test-preload-416177) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.136 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:34:29.596792  701163 main.go:141] libmachine: (test-preload-416177) DBG | About to run SSH command:
	I1025 22:34:29.596805  701163 main.go:141] libmachine: (test-preload-416177) DBG | exit 0
	I1025 22:34:29.716883  701163 main.go:141] libmachine: (test-preload-416177) DBG | SSH cmd err, output: <nil>: 
	I1025 22:34:29.717243  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetConfigRaw
	I1025 22:34:29.717936  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetIP
	I1025 22:34:29.720340  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.720628  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.720660  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.720865  701163 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/config.json ...
	I1025 22:34:29.721108  701163 machine.go:93] provisionDockerMachine start ...
	I1025 22:34:29.721131  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:29.721397  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:29.723560  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.723903  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.723932  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.724050  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:29.724210  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.724337  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.724433  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:29.724610  701163 main.go:141] libmachine: Using SSH client type: native
	I1025 22:34:29.724832  701163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1025 22:34:29.724847  701163 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 22:34:29.825379  701163 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 22:34:29.825421  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetMachineName
	I1025 22:34:29.825707  701163 buildroot.go:166] provisioning hostname "test-preload-416177"
	I1025 22:34:29.825735  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetMachineName
	I1025 22:34:29.825882  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:29.828318  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.828632  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.828675  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.828747  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:29.828965  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.829097  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.829234  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:29.829369  701163 main.go:141] libmachine: Using SSH client type: native
	I1025 22:34:29.829542  701163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1025 22:34:29.829554  701163 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-416177 && echo "test-preload-416177" | sudo tee /etc/hostname
	I1025 22:34:29.939185  701163 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-416177
	
	I1025 22:34:29.939220  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:29.941940  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.942304  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:29.942329  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:29.942598  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:29.942824  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.943018  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:29.943181  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:29.943474  701163 main.go:141] libmachine: Using SSH client type: native
	I1025 22:34:29.943663  701163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1025 22:34:29.943688  701163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-416177' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-416177/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-416177' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:34:30.049955  701163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:34:30.049987  701163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:34:30.050036  701163 buildroot.go:174] setting up certificates
	I1025 22:34:30.050051  701163 provision.go:84] configureAuth start
	I1025 22:34:30.050067  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetMachineName
	I1025 22:34:30.050374  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetIP
	I1025 22:34:30.052948  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.053302  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.053333  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.053425  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.055429  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.055797  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.055831  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.055963  701163 provision.go:143] copyHostCerts
	I1025 22:34:30.056040  701163 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:34:30.056064  701163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:34:30.056129  701163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:34:30.056226  701163 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:34:30.056234  701163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:34:30.056258  701163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:34:30.056382  701163 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:34:30.056392  701163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:34:30.056417  701163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:34:30.056499  701163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.test-preload-416177 san=[127.0.0.1 192.168.39.136 localhost minikube test-preload-416177]
	I1025 22:34:30.302551  701163 provision.go:177] copyRemoteCerts
	I1025 22:34:30.302622  701163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:34:30.302649  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.305469  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.305820  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.305849  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.305974  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.306186  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.306362  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.306512  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:30.387852  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:34:30.413938  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:34:30.437105  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 22:34:30.460051  701163 provision.go:87] duration metric: took 409.985061ms to configureAuth
	I1025 22:34:30.460084  701163 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:34:30.460294  701163 config.go:182] Loaded profile config "test-preload-416177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1025 22:34:30.460398  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.463304  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.463714  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.463747  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.463938  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.464131  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.464274  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.464378  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.464577  701163 main.go:141] libmachine: Using SSH client type: native
	I1025 22:34:30.464780  701163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1025 22:34:30.464801  701163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:34:30.699976  701163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:34:30.700010  701163 machine.go:96] duration metric: took 978.886218ms to provisionDockerMachine
	I1025 22:34:30.700023  701163 start.go:293] postStartSetup for "test-preload-416177" (driver="kvm2")
	I1025 22:34:30.700036  701163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:34:30.700059  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:30.700373  701163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:34:30.700421  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.703140  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.703487  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.703525  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.703656  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.703851  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.704030  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.704165  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:30.783894  701163 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:34:30.788249  701163 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:34:30.788270  701163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:34:30.788332  701163 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:34:30.788420  701163 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:34:30.788509  701163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:34:30.798166  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:34:30.825022  701163 start.go:296] duration metric: took 124.98332ms for postStartSetup
	I1025 22:34:30.825066  701163 fix.go:56] duration metric: took 19.597876485s for fixHost
	I1025 22:34:30.825088  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.827643  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.827971  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.828010  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.828132  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.828343  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.828535  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.828702  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.828849  701163 main.go:141] libmachine: Using SSH client type: native
	I1025 22:34:30.829059  701163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.136 22 <nil> <nil>}
	I1025 22:34:30.829070  701163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:34:30.929996  701163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729895670.892924552
	
	I1025 22:34:30.930020  701163 fix.go:216] guest clock: 1729895670.892924552
	I1025 22:34:30.930028  701163 fix.go:229] Guest: 2024-10-25 22:34:30.892924552 +0000 UTC Remote: 2024-10-25 22:34:30.825070887 +0000 UTC m=+33.813765318 (delta=67.853665ms)
	I1025 22:34:30.930048  701163 fix.go:200] guest clock delta is within tolerance: 67.853665ms
	I1025 22:34:30.930053  701163 start.go:83] releasing machines lock for "test-preload-416177", held for 19.702878153s
	I1025 22:34:30.930073  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:30.930353  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetIP
	I1025 22:34:30.932968  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.933304  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.933353  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.933472  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:30.933923  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:30.934086  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:30.934195  701163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:34:30.934239  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.934277  701163 ssh_runner.go:195] Run: cat /version.json
	I1025 22:34:30.934301  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:30.936695  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.936978  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.937010  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.937149  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.937152  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.937389  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.937547  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:30.937559  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.937566  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:30.937703  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:30.937751  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:30.937899  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:30.938056  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:30.938195  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:31.036055  701163 ssh_runner.go:195] Run: systemctl --version
	I1025 22:34:31.041921  701163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:34:31.186380  701163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:34:31.193397  701163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:34:31.193464  701163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:34:31.210712  701163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:34:31.210741  701163 start.go:495] detecting cgroup driver to use...
	I1025 22:34:31.210831  701163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:34:31.226139  701163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:34:31.240005  701163 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:34:31.240063  701163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:34:31.253700  701163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:34:31.266977  701163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:34:31.384333  701163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:34:31.522701  701163 docker.go:233] disabling docker service ...
	I1025 22:34:31.522792  701163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:34:31.536337  701163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:34:31.549101  701163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:34:31.687888  701163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:34:31.805761  701163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:34:31.820061  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:34:31.838483  701163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1025 22:34:31.838543  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.848436  701163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:34:31.848502  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.858599  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.868772  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.878814  701163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:34:31.889003  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.898919  701163 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.916299  701163 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:34:31.926216  701163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:34:31.935248  701163 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:34:31.935314  701163 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:34:31.948022  701163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:34:31.961404  701163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:34:32.083934  701163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 22:34:32.177384  701163 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:34:32.177459  701163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:34:32.182657  701163 start.go:563] Will wait 60s for crictl version
	I1025 22:34:32.182713  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:32.186429  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:34:32.224209  701163 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:34:32.224291  701163 ssh_runner.go:195] Run: crio --version
	I1025 22:34:32.251671  701163 ssh_runner.go:195] Run: crio --version
	I1025 22:34:32.281414  701163 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1025 22:34:32.283075  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetIP
	I1025 22:34:32.285583  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:32.285942  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:32.285973  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:32.286213  701163 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 22:34:32.290217  701163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:34:32.302822  701163 kubeadm.go:883] updating cluster {Name:test-preload-416177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-416177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:34:32.302953  701163 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1025 22:34:32.303013  701163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:34:32.338720  701163 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1025 22:34:32.338786  701163 ssh_runner.go:195] Run: which lz4
	I1025 22:34:32.342793  701163 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:34:32.346854  701163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:34:32.346882  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1025 22:34:33.855890  701163 crio.go:462] duration metric: took 1.513129877s to copy over tarball
	I1025 22:34:33.855974  701163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:34:36.222029  701163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.366025345s)
	I1025 22:34:36.222059  701163 crio.go:469] duration metric: took 2.36613577s to extract the tarball
	I1025 22:34:36.222067  701163 ssh_runner.go:146] rm: /preloaded.tar.lz4
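
The preload round-trip above reduces to a stat-or-copy followed by an lz4 extraction into /var. A condensed sketch of the equivalent manual steps, assuming SSH access as root to the node IP from this log (the local tarball path is a placeholder for the cached file under .minikube/cache):

    # Copy the preloaded image tarball to the node only if it is not already there,
    # then unpack it so CRI-O's image store starts pre-populated and remove it.
    ssh root@192.168.39.136 'stat -c "%s %y" /preloaded.tar.lz4' || \
      scp preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 root@192.168.39.136:/preloaded.tar.lz4
    ssh root@192.168.39.136 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
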
	I1025 22:34:36.263658  701163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:34:36.306916  701163 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1025 22:34:36.306946  701163 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 22:34:36.307003  701163 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:34:36.307029  701163 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.307048  701163 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 22:34:36.307064  701163 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.307099  701163 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:36.307112  701163 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:36.307036  701163 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.307087  701163 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.308426  701163 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:36.308446  701163 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.308527  701163 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.308538  701163 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:36.308543  701163 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:34:36.308538  701163 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.308568  701163 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.308702  701163 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 22:34:36.462755  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.490963  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.500395  701163 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1025 22:34:36.500448  701163 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.500490  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.505458  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.511709  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.548515  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.548714  701163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1025 22:34:36.548752  701163 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.548807  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.575176  701163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1025 22:34:36.575212  701163 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.575249  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.583386  701163 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1025 22:34:36.583427  701163 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.583474  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.589420  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:36.600040  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 22:34:36.606767  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:36.623065  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.623094  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.623132  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.623223  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.689588  701163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1025 22:34:36.689637  701163 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:36.689682  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.773832  701163 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1025 22:34:36.773890  701163 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1025 22:34:36.773907  701163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1025 22:34:36.773944  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.773945  701163 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:36.773991  701163 ssh_runner.go:195] Run: which crictl
	I1025 22:34:36.774008  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.774112  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 22:34:36.779550  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.779628  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.779669  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:36.867334  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 22:34:36.867448  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1025 22:34:36.867462  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 22:34:36.867478  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1025 22:34:36.867516  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:36.877646  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1025 22:34:36.890693  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1025 22:34:36.890736  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:37.009105  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1025 22:34:37.009137  701163 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 22:34:37.009194  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1025 22:34:37.009258  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1025 22:34:37.009328  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:37.009362  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1025 22:34:37.009421  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1025 22:34:37.009492  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1025 22:34:37.009548  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1025 22:34:37.022216  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 22:34:37.022262  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1025 22:34:37.022337  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1025 22:34:37.458951  701163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:34:40.129739  701163 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (3.120510192s)
	I1025 22:34:40.129784  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1025 22:34:40.129829  701163 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.120483202s)
	I1025 22:34:40.129888  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1025 22:34:40.129966  701163 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.12057056s)
	I1025 22:34:40.129998  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1025 22:34:40.130007  701163 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1025 22:34:40.130005  701163 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.120494522s)
	I1025 22:34:40.130027  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1025 22:34:40.130051  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1025 22:34:40.130069  701163 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.120498556s)
	I1025 22:34:40.130117  701163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1025 22:34:40.130123  701163 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.107886004s)
	I1025 22:34:40.130153  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1025 22:34:40.130168  701163 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (3.10781456s)
	I1025 22:34:40.130185  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1025 22:34:40.130210  701163 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.671232183s)
	I1025 22:34:40.130252  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1025 22:34:40.195456  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1025 22:34:40.195631  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1025 22:34:40.618569  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1025 22:34:40.618628  701163 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1025 22:34:40.618689  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1025 22:34:40.618693  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1025 22:34:40.618728  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1025 22:34:40.618691  701163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1025 22:34:40.618826  701163 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 22:34:41.466567  701163 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1025 22:34:41.466637  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1025 22:34:41.466682  701163 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1025 22:34:41.466783  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1025 22:34:43.619508  701163 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.152696667s)
	I1025 22:34:43.619550  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1025 22:34:43.619584  701163 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1025 22:34:43.619642  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1025 22:34:44.362939  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1025 22:34:44.362997  701163 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1025 22:34:44.363053  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1025 22:34:45.110561  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1025 22:34:45.110617  701163 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 22:34:45.110680  701163 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1025 22:34:45.252360  701163 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1025 22:34:45.252422  701163 cache_images.go:123] Successfully loaded all cached images
	I1025 22:34:45.252430  701163 cache_images.go:92] duration metric: took 8.945471555s to LoadCachedImages
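
Each image in the LoadCachedImages list follows the same round-trip seen interleaved above: inspect it in the runtime, remove the stale copy when it is not at the expected digest, then load the cached tarball with podman. A simplified sketch for a single image (paths as seen in the log; the digest comparison is elided here):

    IMG=registry.k8s.io/coredns/coredns:v1.8.6
    TARBALL=/var/lib/minikube/images/coredns_v1.8.6
    # If the image is missing (or was removed because its digest did not match),
    # re-load it from the transferred tarball.
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      sudo crictl rmi "$IMG" 2>/dev/null || true
      sudo podman load -i "$TARBALL"
    fi
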
	I1025 22:34:45.252447  701163 kubeadm.go:934] updating node { 192.168.39.136 8443 v1.24.4 crio true true} ...
	I1025 22:34:45.252596  701163 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-416177 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-416177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:34:45.252729  701163 ssh_runner.go:195] Run: crio config
	I1025 22:34:45.302989  701163 cni.go:84] Creating CNI manager for ""
	I1025 22:34:45.303018  701163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:34:45.303031  701163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 22:34:45.303056  701163 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.136 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-416177 NodeName:test-preload-416177 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:34:45.303217  701163 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-416177"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:34:45.303294  701163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1025 22:34:45.313055  701163 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:34:45.313132  701163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:34:45.322408  701163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1025 22:34:45.339084  701163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:34:45.355837  701163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1025 22:34:45.373070  701163 ssh_runner.go:195] Run: grep 192.168.39.136	control-plane.minikube.internal$ /etc/hosts
	I1025 22:34:45.377214  701163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:34:45.388989  701163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:34:45.509697  701163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:34:45.528171  701163 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177 for IP: 192.168.39.136
	I1025 22:34:45.528198  701163 certs.go:194] generating shared ca certs ...
	I1025 22:34:45.528217  701163 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:34:45.528455  701163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:34:45.528508  701163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:34:45.528525  701163 certs.go:256] generating profile certs ...
	I1025 22:34:45.528651  701163 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/client.key
	I1025 22:34:45.528731  701163 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/apiserver.key.fbb04ba7
	I1025 22:34:45.528781  701163 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/proxy-client.key
	I1025 22:34:45.528947  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:34:45.529007  701163 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:34:45.529022  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:34:45.529052  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:34:45.529081  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:34:45.529112  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:34:45.529170  701163 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:34:45.529994  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:34:45.576862  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:34:45.616890  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:34:45.653841  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:34:45.699468  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 22:34:45.731624  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:34:45.764303  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:34:45.788727  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 22:34:45.812681  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:34:45.836076  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:34:45.859543  701163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:34:45.882882  701163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:34:45.899326  701163 ssh_runner.go:195] Run: openssl version
	I1025 22:34:45.905422  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:34:45.916065  701163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:34:45.920851  701163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:34:45.920914  701163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:34:45.926842  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:34:45.937373  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:34:45.947711  701163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:34:45.952178  701163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:34:45.952242  701163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:34:45.957749  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:34:45.968244  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:34:45.978545  701163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:34:45.983187  701163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:34:45.983260  701163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:34:45.988949  701163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
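
Each CA bundle above is published to the system trust store by linking it under /etc/ssl/certs and then creating the OpenSSL subject-hash symlink (the b5213941.0 / 3ec20f2e.0 / 51391683.0 names in the log). A sketch of the same procedure for the minikubeCA file, with the hash computed at run time:

    SRC=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$SRC" /etc/ssl/certs/minikubeCA.pem
    # OpenSSL looks certificates up by <subject-hash>.0, so create that link too.
    HASH=$(openssl x509 -hash -noout -in "$SRC")
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
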
	I1025 22:34:45.999670  701163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:34:46.004529  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:34:46.010723  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:34:46.016438  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:34:46.022118  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:34:46.027856  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:34:46.033551  701163 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
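
The -checkend probe used on each control-plane certificate exits non-zero if the certificate expires within the given number of seconds, so the 86400-second window above is a simple "still valid for at least another day" test:

    # Exit status 0 means the certificate is still valid 24 hours from now;
    # non-zero means it expires within that window (or could not be read).
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate valid for at least 24h"
    else
      echo "certificate expires within 24h"
    fi
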
	I1025 22:34:46.039235  701163 kubeadm.go:392] StartCluster: {Name:test-preload-416177 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-416177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:34:46.039350  701163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:34:46.039396  701163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:34:46.076874  701163 cri.go:89] found id: ""
	I1025 22:34:46.076982  701163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:34:46.087118  701163 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 22:34:46.087141  701163 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 22:34:46.087195  701163 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:34:46.096525  701163 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:34:46.097030  701163 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-416177" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:34:46.097161  701163 kubeconfig.go:62] /home/jenkins/minikube-integration/19758-661979/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-416177" cluster setting kubeconfig missing "test-preload-416177" context setting]
	I1025 22:34:46.097480  701163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:34:46.098096  701163 kapi.go:59] client config for test-preload-416177: &rest.Config{Host:"https://192.168.39.136:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/client.crt", KeyFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/client.key", CAFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 22:34:46.098790  701163 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:34:46.108264  701163 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.136
	I1025 22:34:46.108302  701163 kubeadm.go:1160] stopping kube-system containers ...
	I1025 22:34:46.108365  701163 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 22:34:46.108432  701163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:34:46.145415  701163 cri.go:89] found id: ""
	I1025 22:34:46.145548  701163 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 22:34:46.161869  701163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:34:46.171827  701163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:34:46.171852  701163 kubeadm.go:157] found existing configuration files:
	
	I1025 22:34:46.171896  701163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:34:46.180742  701163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:34:46.180807  701163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:34:46.189965  701163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:34:46.198778  701163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:34:46.198842  701163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:34:46.208013  701163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:34:46.216854  701163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:34:46.216935  701163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:34:46.226056  701163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:34:46.234861  701163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:34:46.234923  701163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:34:46.244036  701163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:34:46.253415  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:34:46.341730  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:34:47.013926  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:34:47.274781  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:34:47.350095  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
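
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases against the generated config instead of a full kubeadm init. The five commands above are, spelled out:

    # Re-generate certs, kubeconfigs, the kubelet bootstrap, the static
    # control-plane manifests, and the local etcd manifest from the same config.
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
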
	I1025 22:34:47.449045  701163 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:34:47.449143  701163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:34:47.950165  701163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:34:48.450151  701163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:34:48.502027  701163 api_server.go:72] duration metric: took 1.052981615s to wait for apiserver process to appear ...
	I1025 22:34:48.502061  701163 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:34:48.502093  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:48.502637  701163 api_server.go:269] stopped: https://192.168.39.136:8443/healthz: Get "https://192.168.39.136:8443/healthz": dial tcp 192.168.39.136:8443: connect: connection refused
	I1025 22:34:49.002159  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:49.002810  701163 api_server.go:269] stopped: https://192.168.39.136:8443/healthz: Get "https://192.168.39.136:8443/healthz": dial tcp 192.168.39.136:8443: connect: connection refused
	I1025 22:34:49.502331  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:52.775200  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:34:52.775237  701163 api_server.go:103] status: https://192.168.39.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:34:52.775256  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:52.819308  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:34:52.819347  701163 api_server.go:103] status: https://192.168.39.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:34:53.002672  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:53.008596  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:34:53.008630  701163 api_server.go:103] status: https://192.168.39.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:34:53.502298  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:53.507512  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:34:53.507546  701163 api_server.go:103] status: https://192.168.39.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:34:54.003141  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:34:54.008592  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I1025 22:34:54.015158  701163 api_server.go:141] control plane version: v1.24.4
	I1025 22:34:54.015203  701163 api_server.go:131] duration metric: took 5.513125749s to wait for apiserver health ...
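
The readiness loop above is a plain poll of the apiserver's /healthz endpoint, which moves from connection-refused, through 403 and 500 while the bootstrap post-start hooks (rbac/bootstrap-roles, system priority classes) finish, to 200 "ok". The same probe by hand, assuming anonymous access to /healthz is granted once the RBAC bootstrap roles exist (the default):

    # -k because the serving certificate is not in the local trust store; expect
    # Forbidden / "healthz check failed" bodies until the bootstrap hooks complete.
    curl -ksS https://192.168.39.136:8443/healthz
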
	I1025 22:34:54.015214  701163 cni.go:84] Creating CNI manager for ""
	I1025 22:34:54.015221  701163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:34:54.017158  701163 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:34:54.018490  701163 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:34:54.030897  701163 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
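
The 496-byte conflist written above is minikube's bridge CNI configuration; its exact contents are not shown in this log, but a representative bridge + portmap conflist for the 10.244.0.0/16 pod CIDR looks roughly like this (illustrative only, not the exact bytes minikube writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
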
	I1025 22:34:54.048220  701163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:34:54.048394  701163 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 22:34:54.048433  701163 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 22:34:54.060710  701163 system_pods.go:59] 8 kube-system pods found
	I1025 22:34:54.060758  701163 system_pods.go:61] "coredns-6d4b75cb6d-b82dq" [2d9121fd-1be8-4855-8aaf-3e05683f0d0d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:34:54.060769  701163 system_pods.go:61] "coredns-6d4b75cb6d-jx7ls" [fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:34:54.060777  701163 system_pods.go:61] "etcd-test-preload-416177" [623f1e57-4540-4d95-a002-902cffc3d25c] Running
	I1025 22:34:54.060787  701163 system_pods.go:61] "kube-apiserver-test-preload-416177" [07e8e387-8f90-4551-a557-769a7bcfaf76] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:34:54.060797  701163 system_pods.go:61] "kube-controller-manager-test-preload-416177" [35ebf376-9557-4d38-bf6d-b5acbfc67192] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:34:54.060805  701163 system_pods.go:61] "kube-proxy-fn45p" [aa471af1-ddb1-407e-a720-5977ac4cdebc] Running
	I1025 22:34:54.060812  701163 system_pods.go:61] "kube-scheduler-test-preload-416177" [28843703-d21f-4ed7-bfc9-381cc2031e6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:34:54.060822  701163 system_pods.go:61] "storage-provisioner" [e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 22:34:54.060831  701163 system_pods.go:74] duration metric: took 12.58271ms to wait for pod list to return data ...
	I1025 22:34:54.060851  701163 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:34:54.063970  701163 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:34:54.063995  701163 node_conditions.go:123] node cpu capacity is 2
	I1025 22:34:54.064008  701163 node_conditions.go:105] duration metric: took 3.151839ms to run NodePressure ...
	I1025 22:34:54.064024  701163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:34:54.226197  701163 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1025 22:34:54.235342  701163 kubeadm.go:739] kubelet initialised
	I1025 22:34:54.235364  701163 kubeadm.go:740] duration metric: took 9.139729ms waiting for restarted kubelet to initialise ...
	I1025 22:34:54.235373  701163 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:34:54.244171  701163 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-b82dq" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:54.249781  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "coredns-6d4b75cb6d-b82dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.249806  701163 pod_ready.go:82] duration metric: took 5.610302ms for pod "coredns-6d4b75cb6d-b82dq" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:54.249815  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "coredns-6d4b75cb6d-b82dq" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.249823  701163 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:54.255349  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.255366  701163 pod_ready.go:82] duration metric: took 5.537595ms for pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:54.255374  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.255383  701163 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:54.258993  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "etcd-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.259013  701163 pod_ready.go:82] duration metric: took 3.624341ms for pod "etcd-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:54.259021  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "etcd-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.259026  701163 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:54.452422  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "kube-apiserver-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.452453  701163 pod_ready.go:82] duration metric: took 193.413742ms for pod "kube-apiserver-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:54.452464  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "kube-apiserver-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.452470  701163 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:54.852831  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.852857  701163 pod_ready.go:82] duration metric: took 400.378679ms for pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:54.852867  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:54.852874  701163 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-fn45p" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:55.252318  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "kube-proxy-fn45p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:55.252351  701163 pod_ready.go:82] duration metric: took 399.469223ms for pod "kube-proxy-fn45p" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:55.252374  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "kube-proxy-fn45p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:55.252384  701163 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:34:55.652636  701163 pod_ready.go:98] node "test-preload-416177" hosting pod "kube-scheduler-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:55.652664  701163 pod_ready.go:82] duration metric: took 400.272849ms for pod "kube-scheduler-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	E1025 22:34:55.652674  701163 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-416177" hosting pod "kube-scheduler-test-preload-416177" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:55.652681  701163 pod_ready.go:39] duration metric: took 1.417299064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:34:55.652697  701163 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:34:55.664558  701163 ops.go:34] apiserver oom_adj: -16
	I1025 22:34:55.664577  701163 kubeadm.go:597] duration metric: took 9.577431324s to restartPrimaryControlPlane
	I1025 22:34:55.664586  701163 kubeadm.go:394] duration metric: took 9.625399556s to StartCluster
	I1025 22:34:55.664604  701163 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:34:55.664682  701163 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:34:55.665839  701163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:34:55.666129  701163 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.136 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:34:55.666233  701163 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:34:55.666335  701163 addons.go:69] Setting storage-provisioner=true in profile "test-preload-416177"
	I1025 22:34:55.666359  701163 addons.go:234] Setting addon storage-provisioner=true in "test-preload-416177"
	I1025 22:34:55.666365  701163 addons.go:69] Setting default-storageclass=true in profile "test-preload-416177"
	I1025 22:34:55.666375  701163 config.go:182] Loaded profile config "test-preload-416177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1025 22:34:55.666391  701163 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-416177"
	W1025 22:34:55.666370  701163 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:34:55.666457  701163 host.go:66] Checking if "test-preload-416177" exists ...
	I1025 22:34:55.666731  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:55.666778  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:55.666818  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:55.666861  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:55.667908  701163 out.go:177] * Verifying Kubernetes components...
	I1025 22:34:55.669417  701163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:34:55.681700  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41249
	I1025 22:34:55.682028  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33033
	I1025 22:34:55.682122  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:55.682463  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:55.682626  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:55.682647  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:55.682934  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:55.682958  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:55.682976  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:55.683276  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:55.683448  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetState
	I1025 22:34:55.683539  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:55.683582  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:55.686039  701163 kapi.go:59] client config for test-preload-416177: &rest.Config{Host:"https://192.168.39.136:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/client.crt", KeyFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/test-preload-416177/client.key", CAFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 22:34:55.686276  701163 addons.go:234] Setting addon default-storageclass=true in "test-preload-416177"
	W1025 22:34:55.686288  701163 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:34:55.686312  701163 host.go:66] Checking if "test-preload-416177" exists ...
	I1025 22:34:55.686555  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:55.686594  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:55.700853  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
	I1025 22:34:55.701399  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:55.702009  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:55.702032  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:55.702336  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:55.702624  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I1025 22:34:55.702995  701163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:34:55.703054  701163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:34:55.703076  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:55.703618  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:55.703643  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:55.703937  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:55.704112  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetState
	I1025 22:34:55.705776  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:55.707834  701163 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:34:55.709181  701163 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:34:55.709197  701163 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:34:55.709212  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:55.712443  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:55.712922  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:55.712970  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:55.713154  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:55.713345  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:55.713501  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:55.713642  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:55.736269  701163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I1025 22:34:55.736884  701163 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:34:55.737473  701163 main.go:141] libmachine: Using API Version  1
	I1025 22:34:55.737502  701163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:34:55.737822  701163 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:34:55.738002  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetState
	I1025 22:34:55.739622  701163 main.go:141] libmachine: (test-preload-416177) Calling .DriverName
	I1025 22:34:55.739811  701163 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:34:55.739840  701163 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:34:55.739860  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHHostname
	I1025 22:34:55.742837  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:55.743169  701163 main.go:141] libmachine: (test-preload-416177) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:49:9b", ip: ""} in network mk-test-preload-416177: {Iface:virbr1 ExpiryTime:2024-10-25 23:34:22 +0000 UTC Type:0 Mac:52:54:00:42:49:9b Iaid: IPaddr:192.168.39.136 Prefix:24 Hostname:test-preload-416177 Clientid:01:52:54:00:42:49:9b}
	I1025 22:34:55.743196  701163 main.go:141] libmachine: (test-preload-416177) DBG | domain test-preload-416177 has defined IP address 192.168.39.136 and MAC address 52:54:00:42:49:9b in network mk-test-preload-416177
	I1025 22:34:55.743319  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHPort
	I1025 22:34:55.743486  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHKeyPath
	I1025 22:34:55.743621  701163 main.go:141] libmachine: (test-preload-416177) Calling .GetSSHUsername
	I1025 22:34:55.743751  701163 sshutil.go:53] new ssh client: &{IP:192.168.39.136 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/test-preload-416177/id_rsa Username:docker}
	I1025 22:34:55.831519  701163 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:34:55.850079  701163 node_ready.go:35] waiting up to 6m0s for node "test-preload-416177" to be "Ready" ...
	I1025 22:34:55.914346  701163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:34:56.004702  701163 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:34:56.955279  701163 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040874514s)
	I1025 22:34:56.955343  701163 main.go:141] libmachine: Making call to close driver server
	I1025 22:34:56.955360  701163 main.go:141] libmachine: (test-preload-416177) Calling .Close
	I1025 22:34:56.955389  701163 main.go:141] libmachine: Making call to close driver server
	I1025 22:34:56.955418  701163 main.go:141] libmachine: (test-preload-416177) Calling .Close
	I1025 22:34:56.955670  701163 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:34:56.955688  701163 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:34:56.955697  701163 main.go:141] libmachine: Making call to close driver server
	I1025 22:34:56.955704  701163 main.go:141] libmachine: (test-preload-416177) Calling .Close
	I1025 22:34:56.955674  701163 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:34:56.955717  701163 main.go:141] libmachine: (test-preload-416177) DBG | Closing plugin on server side
	I1025 22:34:56.955735  701163 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:34:56.955746  701163 main.go:141] libmachine: Making call to close driver server
	I1025 22:34:56.955753  701163 main.go:141] libmachine: (test-preload-416177) Calling .Close
	I1025 22:34:56.955976  701163 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:34:56.955992  701163 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:34:56.955997  701163 main.go:141] libmachine: (test-preload-416177) DBG | Closing plugin on server side
	I1025 22:34:56.956001  701163 main.go:141] libmachine: (test-preload-416177) DBG | Closing plugin on server side
	I1025 22:34:56.956021  701163 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:34:56.956033  701163 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:34:56.964776  701163 main.go:141] libmachine: Making call to close driver server
	I1025 22:34:56.964799  701163 main.go:141] libmachine: (test-preload-416177) Calling .Close
	I1025 22:34:56.965023  701163 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:34:56.965040  701163 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:34:56.965063  701163 main.go:141] libmachine: (test-preload-416177) DBG | Closing plugin on server side
	I1025 22:34:56.967040  701163 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 22:34:56.968423  701163 addons.go:510] duration metric: took 1.302197422s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 22:34:57.853608  701163 node_ready.go:53] node "test-preload-416177" has status "Ready":"False"
	I1025 22:34:59.854808  701163 node_ready.go:53] node "test-preload-416177" has status "Ready":"False"
	I1025 22:35:01.854939  701163 node_ready.go:53] node "test-preload-416177" has status "Ready":"False"
	I1025 22:35:03.353761  701163 node_ready.go:49] node "test-preload-416177" has status "Ready":"True"
	I1025 22:35:03.353786  701163 node_ready.go:38] duration metric: took 7.503668035s for node "test-preload-416177" to be "Ready" ...
	I1025 22:35:03.353796  701163 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:35:03.359419  701163 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:03.364268  701163 pod_ready.go:93] pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:03.364291  701163 pod_ready.go:82] duration metric: took 4.836991ms for pod "coredns-6d4b75cb6d-jx7ls" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:03.364299  701163 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:05.370776  701163 pod_ready.go:103] pod "etcd-test-preload-416177" in "kube-system" namespace has status "Ready":"False"
	I1025 22:35:06.869992  701163 pod_ready.go:93] pod "etcd-test-preload-416177" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:06.870017  701163 pod_ready.go:82] duration metric: took 3.505711117s for pod "etcd-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.870027  701163 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.874857  701163 pod_ready.go:93] pod "kube-apiserver-test-preload-416177" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:06.874884  701163 pod_ready.go:82] duration metric: took 4.849751ms for pod "kube-apiserver-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.874894  701163 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.879543  701163 pod_ready.go:93] pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:06.879563  701163 pod_ready.go:82] duration metric: took 4.663431ms for pod "kube-controller-manager-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.879573  701163 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fn45p" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.884915  701163 pod_ready.go:93] pod "kube-proxy-fn45p" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:06.884939  701163 pod_ready.go:82] duration metric: took 5.359579ms for pod "kube-proxy-fn45p" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.884965  701163 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.955314  701163 pod_ready.go:93] pod "kube-scheduler-test-preload-416177" in "kube-system" namespace has status "Ready":"True"
	I1025 22:35:06.955341  701163 pod_ready.go:82] duration metric: took 70.364012ms for pod "kube-scheduler-test-preload-416177" in "kube-system" namespace to be "Ready" ...
	I1025 22:35:06.955353  701163 pod_ready.go:39] duration metric: took 3.601548059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:35:06.955371  701163 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:35:06.955434  701163 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:35:06.971515  701163 api_server.go:72] duration metric: took 11.305341235s to wait for apiserver process to appear ...
	I1025 22:35:06.971548  701163 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:35:06.971573  701163 api_server.go:253] Checking apiserver healthz at https://192.168.39.136:8443/healthz ...
	I1025 22:35:06.976890  701163 api_server.go:279] https://192.168.39.136:8443/healthz returned 200:
	ok
	I1025 22:35:06.977848  701163 api_server.go:141] control plane version: v1.24.4
	I1025 22:35:06.977870  701163 api_server.go:131] duration metric: took 6.315168ms to wait for apiserver health ...
	I1025 22:35:06.977878  701163 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:35:07.155423  701163 system_pods.go:59] 7 kube-system pods found
	I1025 22:35:07.155466  701163 system_pods.go:61] "coredns-6d4b75cb6d-jx7ls" [fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40] Running
	I1025 22:35:07.155471  701163 system_pods.go:61] "etcd-test-preload-416177" [623f1e57-4540-4d95-a002-902cffc3d25c] Running
	I1025 22:35:07.155476  701163 system_pods.go:61] "kube-apiserver-test-preload-416177" [07e8e387-8f90-4551-a557-769a7bcfaf76] Running
	I1025 22:35:07.155479  701163 system_pods.go:61] "kube-controller-manager-test-preload-416177" [35ebf376-9557-4d38-bf6d-b5acbfc67192] Running
	I1025 22:35:07.155483  701163 system_pods.go:61] "kube-proxy-fn45p" [aa471af1-ddb1-407e-a720-5977ac4cdebc] Running
	I1025 22:35:07.155486  701163 system_pods.go:61] "kube-scheduler-test-preload-416177" [28843703-d21f-4ed7-bfc9-381cc2031e6f] Running
	I1025 22:35:07.155489  701163 system_pods.go:61] "storage-provisioner" [e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f] Running
	I1025 22:35:07.155495  701163 system_pods.go:74] duration metric: took 177.611337ms to wait for pod list to return data ...
	I1025 22:35:07.155505  701163 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:35:07.354020  701163 default_sa.go:45] found service account: "default"
	I1025 22:35:07.354049  701163 default_sa.go:55] duration metric: took 198.537705ms for default service account to be created ...
	I1025 22:35:07.354060  701163 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 22:35:07.555840  701163 system_pods.go:86] 7 kube-system pods found
	I1025 22:35:07.555871  701163 system_pods.go:89] "coredns-6d4b75cb6d-jx7ls" [fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40] Running
	I1025 22:35:07.555876  701163 system_pods.go:89] "etcd-test-preload-416177" [623f1e57-4540-4d95-a002-902cffc3d25c] Running
	I1025 22:35:07.555880  701163 system_pods.go:89] "kube-apiserver-test-preload-416177" [07e8e387-8f90-4551-a557-769a7bcfaf76] Running
	I1025 22:35:07.555883  701163 system_pods.go:89] "kube-controller-manager-test-preload-416177" [35ebf376-9557-4d38-bf6d-b5acbfc67192] Running
	I1025 22:35:07.555886  701163 system_pods.go:89] "kube-proxy-fn45p" [aa471af1-ddb1-407e-a720-5977ac4cdebc] Running
	I1025 22:35:07.555896  701163 system_pods.go:89] "kube-scheduler-test-preload-416177" [28843703-d21f-4ed7-bfc9-381cc2031e6f] Running
	I1025 22:35:07.555899  701163 system_pods.go:89] "storage-provisioner" [e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f] Running
	I1025 22:35:07.555908  701163 system_pods.go:126] duration metric: took 201.841736ms to wait for k8s-apps to be running ...
	I1025 22:35:07.555916  701163 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:35:07.555961  701163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:35:07.570382  701163 system_svc.go:56] duration metric: took 14.445073ms WaitForService to wait for kubelet
	I1025 22:35:07.570422  701163 kubeadm.go:582] duration metric: took 11.90425493s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:35:07.570448  701163 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:35:07.753965  701163 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:35:07.753992  701163 node_conditions.go:123] node cpu capacity is 2
	I1025 22:35:07.754004  701163 node_conditions.go:105] duration metric: took 183.550373ms to run NodePressure ...
	I1025 22:35:07.754021  701163 start.go:241] waiting for startup goroutines ...
	I1025 22:35:07.754030  701163 start.go:246] waiting for cluster config update ...
	I1025 22:35:07.754040  701163 start.go:255] writing updated cluster config ...
	I1025 22:35:07.754302  701163 ssh_runner.go:195] Run: rm -f paused
	I1025 22:35:07.802539  701163 start.go:600] kubectl: 1.31.2, cluster: 1.24.4 (minor skew: 7)
	I1025 22:35:07.804586  701163 out.go:201] 
	W1025 22:35:07.805894  701163 out.go:270] ! /usr/local/bin/kubectl is version 1.31.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1025 22:35:07.807095  701163 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1025 22:35:07.808423  701163 out.go:177] * Done! kubectl is now configured to use "test-preload-416177" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.668745525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729895708668723993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad200111-e382-4b11-a5a9-0bee65f3b5dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.669382939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a478ccd-64a0-4051-b46a-02ee03273286 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.669463939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a478ccd-64a0-4051-b46a-02ee03273286 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.669658587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3616b3feb7bbb84955f355537fa0af0fea8a1f5c1d16d797149fefdcf325cb6c,PodSandboxId:482f422910c466ff3718682b53fcc806726b421ad7164f5b985fc85eff68120e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1729895701899912337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jx7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40,},Annotations:map[string]string{io.kubernetes.container.hash: fb56ef60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5569c9c3193f4b4164a38aa6674a31b6e29967ea49032c76a0ad82f5737218,PodSandboxId:00e9b86b771beb33761eb55e8aff7e5452b4bfab514c436a0d582dac33f593a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729895694723822169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f,},Annotations:map[string]string{io.kubernetes.container.hash: f3ca9bc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013793f687b7e3f2bd3090a34e95373fc231ee861bb3a07a24c4029cb9237010,PodSandboxId:3b6925d1ba83d72e7954c99914328a0f2f6e85e02fa484da2aa74bf870921980,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1729895694411490273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fn45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
471af1-ddb1-407e-a720-5977ac4cdebc,},Annotations:map[string]string{io.kubernetes.container.hash: 7064eb55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f95b4caad97ef1b6660cc0a297a1a971a4d664d42f536d2fbfce5a1fde31f4,PodSandboxId:84d30758c5fad18b41c80e8384417da2e481936a3844b3f31b43029b300f2407,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1729895688193569258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b162d2850d8b0cb505fb5f177deacc51,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9c650970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38afe0f841985b93dc3e0c5f6e3aa5eabd75f572f334b5f20072a1619cc160a1,PodSandboxId:824d2ee9b223ef52e3a1eb2292f319610367060e64789d5b4dcacf2c615bfa06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1729895688151164093,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba593b7f548febb92e47896f3030f41a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b2af3738f9efc24302afccbe08009b65af43ccddf81a6660af6c0a0d53850,PodSandboxId:12cc1c4bc989f929e5f2c7a05310aec6251940f3079788f3490613518a750466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1729895688160561693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf71933043a687075544c900134b06c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f805f78f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f41448a2d215e0284618645b7caeae6624d3a17554f1e4be47582bcffbab92b,PodSandboxId:ba6abd1e8c8952d1c48eb51ed6bd8a1558a02b659e640c6cbd727ee4db850f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1729895688063283764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431953c9fffb96fb5a8e98f3fa2c70a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a478ccd-64a0-4051-b46a-02ee03273286 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.708691619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b76496f0-b4c8-4158-a0c0-bd6e537ea5cc name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.708765094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b76496f0-b4c8-4158-a0c0-bd6e537ea5cc name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.709980516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e5f34c4-2d97-44cb-bb78-7daba9b67727 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.710494321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729895708710470136,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e5f34c4-2d97-44cb-bb78-7daba9b67727 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.711196600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34e9cdbb-9de4-4ad9-95c1-dfd5dd1820d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.711268571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34e9cdbb-9de4-4ad9-95c1-dfd5dd1820d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.711455224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3616b3feb7bbb84955f355537fa0af0fea8a1f5c1d16d797149fefdcf325cb6c,PodSandboxId:482f422910c466ff3718682b53fcc806726b421ad7164f5b985fc85eff68120e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1729895701899912337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jx7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40,},Annotations:map[string]string{io.kubernetes.container.hash: fb56ef60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5569c9c3193f4b4164a38aa6674a31b6e29967ea49032c76a0ad82f5737218,PodSandboxId:00e9b86b771beb33761eb55e8aff7e5452b4bfab514c436a0d582dac33f593a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729895694723822169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f,},Annotations:map[string]string{io.kubernetes.container.hash: f3ca9bc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013793f687b7e3f2bd3090a34e95373fc231ee861bb3a07a24c4029cb9237010,PodSandboxId:3b6925d1ba83d72e7954c99914328a0f2f6e85e02fa484da2aa74bf870921980,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1729895694411490273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fn45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
471af1-ddb1-407e-a720-5977ac4cdebc,},Annotations:map[string]string{io.kubernetes.container.hash: 7064eb55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f95b4caad97ef1b6660cc0a297a1a971a4d664d42f536d2fbfce5a1fde31f4,PodSandboxId:84d30758c5fad18b41c80e8384417da2e481936a3844b3f31b43029b300f2407,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1729895688193569258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b162d2850d8b0cb505fb5f177deacc51,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9c650970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38afe0f841985b93dc3e0c5f6e3aa5eabd75f572f334b5f20072a1619cc160a1,PodSandboxId:824d2ee9b223ef52e3a1eb2292f319610367060e64789d5b4dcacf2c615bfa06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1729895688151164093,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba593b7f548febb92e47896f3030f41a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b2af3738f9efc24302afccbe08009b65af43ccddf81a6660af6c0a0d53850,PodSandboxId:12cc1c4bc989f929e5f2c7a05310aec6251940f3079788f3490613518a750466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1729895688160561693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf71933043a687075544c900134b06c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f805f78f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f41448a2d215e0284618645b7caeae6624d3a17554f1e4be47582bcffbab92b,PodSandboxId:ba6abd1e8c8952d1c48eb51ed6bd8a1558a02b659e640c6cbd727ee4db850f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1729895688063283764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431953c9fffb96fb5a8e98f3fa2c70a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34e9cdbb-9de4-4ad9-95c1-dfd5dd1820d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.747534269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d616f6ed-4415-41e5-b018-39013ba7f28c name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.747626818Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d616f6ed-4415-41e5-b018-39013ba7f28c name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.748448474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd0688b1-d3ec-492e-8a28-963862b6665f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.749074235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729895708749051573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd0688b1-d3ec-492e-8a28-963862b6665f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.749554732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d90d7ec0-bb3b-418b-a93c-ce7328109e28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.749623282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d90d7ec0-bb3b-418b-a93c-ce7328109e28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.749812734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3616b3feb7bbb84955f355537fa0af0fea8a1f5c1d16d797149fefdcf325cb6c,PodSandboxId:482f422910c466ff3718682b53fcc806726b421ad7164f5b985fc85eff68120e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1729895701899912337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jx7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40,},Annotations:map[string]string{io.kubernetes.container.hash: fb56ef60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5569c9c3193f4b4164a38aa6674a31b6e29967ea49032c76a0ad82f5737218,PodSandboxId:00e9b86b771beb33761eb55e8aff7e5452b4bfab514c436a0d582dac33f593a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729895694723822169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f,},Annotations:map[string]string{io.kubernetes.container.hash: f3ca9bc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013793f687b7e3f2bd3090a34e95373fc231ee861bb3a07a24c4029cb9237010,PodSandboxId:3b6925d1ba83d72e7954c99914328a0f2f6e85e02fa484da2aa74bf870921980,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1729895694411490273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fn45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
471af1-ddb1-407e-a720-5977ac4cdebc,},Annotations:map[string]string{io.kubernetes.container.hash: 7064eb55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f95b4caad97ef1b6660cc0a297a1a971a4d664d42f536d2fbfce5a1fde31f4,PodSandboxId:84d30758c5fad18b41c80e8384417da2e481936a3844b3f31b43029b300f2407,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1729895688193569258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b162d2850d8b0cb505fb5f177deacc51,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9c650970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38afe0f841985b93dc3e0c5f6e3aa5eabd75f572f334b5f20072a1619cc160a1,PodSandboxId:824d2ee9b223ef52e3a1eb2292f319610367060e64789d5b4dcacf2c615bfa06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1729895688151164093,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba593b7f548febb92e47896f3030f41a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b2af3738f9efc24302afccbe08009b65af43ccddf81a6660af6c0a0d53850,PodSandboxId:12cc1c4bc989f929e5f2c7a05310aec6251940f3079788f3490613518a750466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1729895688160561693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf71933043a687075544c900134b06c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f805f78f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f41448a2d215e0284618645b7caeae6624d3a17554f1e4be47582bcffbab92b,PodSandboxId:ba6abd1e8c8952d1c48eb51ed6bd8a1558a02b659e640c6cbd727ee4db850f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1729895688063283764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431953c9fffb96fb5a8e98f3fa2c70a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d90d7ec0-bb3b-418b-a93c-ce7328109e28 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.782375710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2822447b-7052-46da-a57a-5ce037f7f3dc name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.782465286Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2822447b-7052-46da-a57a-5ce037f7f3dc name=/runtime.v1.RuntimeService/Version
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.783428933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af714657-e066-4f98-9412-a673562649f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.784118135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729895708784096980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af714657-e066-4f98-9412-a673562649f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.784625233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee4fcb8b-e9fb-42b7-a280-9e37463330bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.784700239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee4fcb8b-e9fb-42b7-a280-9e37463330bb name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:35:08 test-preload-416177 crio[679]: time="2024-10-25 22:35:08.784928015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3616b3feb7bbb84955f355537fa0af0fea8a1f5c1d16d797149fefdcf325cb6c,PodSandboxId:482f422910c466ff3718682b53fcc806726b421ad7164f5b985fc85eff68120e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1729895701899912337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-jx7ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40,},Annotations:map[string]string{io.kubernetes.container.hash: fb56ef60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5569c9c3193f4b4164a38aa6674a31b6e29967ea49032c76a0ad82f5737218,PodSandboxId:00e9b86b771beb33761eb55e8aff7e5452b4bfab514c436a0d582dac33f593a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729895694723822169,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e8bf3eb0-2be9-4b20-bde6-d1ed1d161b4f,},Annotations:map[string]string{io.kubernetes.container.hash: f3ca9bc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:013793f687b7e3f2bd3090a34e95373fc231ee861bb3a07a24c4029cb9237010,PodSandboxId:3b6925d1ba83d72e7954c99914328a0f2f6e85e02fa484da2aa74bf870921980,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1729895694411490273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fn45p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa
471af1-ddb1-407e-a720-5977ac4cdebc,},Annotations:map[string]string{io.kubernetes.container.hash: 7064eb55,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f95b4caad97ef1b6660cc0a297a1a971a4d664d42f536d2fbfce5a1fde31f4,PodSandboxId:84d30758c5fad18b41c80e8384417da2e481936a3844b3f31b43029b300f2407,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1729895688193569258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b162d2850d8b0cb505fb5f177deacc51,},Anno
tations:map[string]string{io.kubernetes.container.hash: 9c650970,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38afe0f841985b93dc3e0c5f6e3aa5eabd75f572f334b5f20072a1619cc160a1,PodSandboxId:824d2ee9b223ef52e3a1eb2292f319610367060e64789d5b4dcacf2c615bfa06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1729895688151164093,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba593b7f548febb92e47896f3030f41a,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:752b2af3738f9efc24302afccbe08009b65af43ccddf81a6660af6c0a0d53850,PodSandboxId:12cc1c4bc989f929e5f2c7a05310aec6251940f3079788f3490613518a750466,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1729895688160561693,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcf71933043a687075544c900134b06c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: f805f78f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f41448a2d215e0284618645b7caeae6624d3a17554f1e4be47582bcffbab92b,PodSandboxId:ba6abd1e8c8952d1c48eb51ed6bd8a1558a02b659e640c6cbd727ee4db850f8c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1729895688063283764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-416177,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0431953c9fffb96fb5a8e98f3fa2c70a,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee4fcb8b-e9fb-42b7-a280-9e37463330bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3616b3feb7bbb       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   482f422910c46       coredns-6d4b75cb6d-jx7ls
	df5569c9c3193       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   00e9b86b771be       storage-provisioner
	013793f687b7e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   3b6925d1ba83d       kube-proxy-fn45p
	91f95b4caad97       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   84d30758c5fad       etcd-test-preload-416177
	752b2af3738f9       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   12cc1c4bc989f       kube-apiserver-test-preload-416177
	38afe0f841985       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   824d2ee9b223e       kube-scheduler-test-preload-416177
	0f41448a2d215       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   ba6abd1e8c895       kube-controller-manager-test-preload-416177
	
	
	==> coredns [3616b3feb7bbb84955f355537fa0af0fea8a1f5c1d16d797149fefdcf325cb6c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38151 - 2393 "HINFO IN 8839909194768979993.1384573142267752954. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015842805s
	
	
	==> describe nodes <==
	Name:               test-preload-416177
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-416177
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc
	                    minikube.k8s.io/name=test-preload-416177
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T22_33_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Oct 2024 22:33:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-416177
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Oct 2024 22:35:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Oct 2024 22:35:02 +0000   Fri, 25 Oct 2024 22:33:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Oct 2024 22:35:02 +0000   Fri, 25 Oct 2024 22:33:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Oct 2024 22:35:02 +0000   Fri, 25 Oct 2024 22:33:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 25 Oct 2024 22:35:02 +0000   Fri, 25 Oct 2024 22:35:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.136
	  Hostname:    test-preload-416177
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9addd313020f43068812b28673f92bdc
	  System UUID:                9addd313-020f-4306-8812-b28673f92bdc
	  Boot ID:                    6914df7d-80e8-4da7-af1c-03ce1df3a929
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-jx7ls                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-test-preload-416177                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-416177             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-416177    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-fn45p                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-416177             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 83s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x5 over 107s)  kubelet          Node test-preload-416177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x5 over 107s)  kubelet          Node test-preload-416177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x4 over 107s)  kubelet          Node test-preload-416177 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node test-preload-416177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node test-preload-416177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node test-preload-416177 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s                  kubelet          Node test-preload-416177 status is now: NodeReady
	  Normal  RegisteredNode           86s                  node-controller  Node test-preload-416177 event: Registered Node test-preload-416177 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-416177 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-416177 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-416177 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-416177 event: Registered Node test-preload-416177 in Controller
	
	
	==> dmesg <==
	[Oct25 22:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050546] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040171] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.834195] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.414167] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597179] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.925991] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.058555] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058836] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.167171] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.132650] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.280224] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[ +13.422279] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.062882] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.696428] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +5.494890] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.033525] systemd-fstab-generator[1766]: Ignoring "noauto" option for root device
	[Oct25 22:35] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [91f95b4caad97ef1b6660cc0a297a1a971a4d664d42f536d2fbfce5a1fde31f4] <==
	{"level":"info","ts":"2024-10-25T22:34:48.571Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"32f03a72bea6354e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-25T22:34:48.573Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-25T22:34:48.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e switched to configuration voters=(3670497960806200654)"}
	{"level":"info","ts":"2024-10-25T22:34:48.579Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6fc8639e731f3dca","local-member-id":"32f03a72bea6354e","added-peer-id":"32f03a72bea6354e","added-peer-peer-urls":["https://192.168.39.136:2380"]}
	{"level":"info","ts":"2024-10-25T22:34:48.579Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6fc8639e731f3dca","local-member-id":"32f03a72bea6354e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:34:48.580Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:34:48.582Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-25T22:34:48.585Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"32f03a72bea6354e","initial-advertise-peer-urls":["https://192.168.39.136:2380"],"listen-peer-urls":["https://192.168.39.136:2380"],"advertise-client-urls":["https://192.168.39.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-25T22:34:48.585Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-10-25T22:34:48.587Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.136:2380"}
	{"level":"info","ts":"2024-10-25T22:34:48.586Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgPreVoteResp from 32f03a72bea6354e at term 2"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became candidate at term 3"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e received MsgVoteResp from 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"32f03a72bea6354e became leader at term 3"}
	{"level":"info","ts":"2024-10-25T22:34:50.329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 32f03a72bea6354e elected leader 32f03a72bea6354e at term 3"}
	{"level":"info","ts":"2024-10-25T22:34:50.337Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"32f03a72bea6354e","local-member-attributes":"{Name:test-preload-416177 ClientURLs:[https://192.168.39.136:2379]}","request-path":"/0/members/32f03a72bea6354e/attributes","cluster-id":"6fc8639e731f3dca","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-25T22:34:50.337Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T22:34:50.338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-25T22:34:50.339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-25T22:34:50.339Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T22:34:50.339Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-25T22:34:50.340Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.136:2379"}
	
	
	==> kernel <==
	 22:35:09 up 0 min,  0 users,  load average: 0.65, 0.18, 0.06
	Linux test-preload-416177 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [752b2af3738f9efc24302afccbe08009b65af43ccddf81a6660af6c0a0d53850] <==
	I1025 22:34:52.741008       1 controller.go:85] Starting OpenAPI V3 controller
	I1025 22:34:52.741067       1 naming_controller.go:291] Starting NamingConditionController
	I1025 22:34:52.741111       1 establishing_controller.go:76] Starting EstablishingController
	I1025 22:34:52.741157       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1025 22:34:52.741193       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1025 22:34:52.741231       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1025 22:34:52.804180       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1025 22:34:52.804446       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1025 22:34:52.814179       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1025 22:34:52.883659       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1025 22:34:52.892771       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 22:34:52.892969       1 cache.go:39] Caches are synced for autoregister controller
	I1025 22:34:52.893139       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1025 22:34:52.893270       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 22:34:52.893530       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 22:34:53.330099       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1025 22:34:53.696310       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 22:34:54.119789       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1025 22:34:54.129043       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1025 22:34:54.162753       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1025 22:34:54.179819       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 22:34:54.186361       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 22:34:54.724472       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1025 22:35:05.310376       1 controller.go:611] quota admission added evaluator for: endpoints
	I1025 22:35:05.449266       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0f41448a2d215e0284618645b7caeae6624d3a17554f1e4be47582bcffbab92b] <==
	I1025 22:35:05.272821       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 22:35:05.275570       1 shared_informer.go:262] Caches are synced for ephemeral
	I1025 22:35:05.278896       1 shared_informer.go:262] Caches are synced for daemon sets
	I1025 22:35:05.280061       1 shared_informer.go:262] Caches are synced for job
	I1025 22:35:05.282449       1 shared_informer.go:262] Caches are synced for deployment
	I1025 22:35:05.289893       1 shared_informer.go:262] Caches are synced for endpoint
	I1025 22:35:05.292909       1 shared_informer.go:262] Caches are synced for GC
	I1025 22:35:05.294988       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1025 22:35:05.300016       1 shared_informer.go:262] Caches are synced for taint
	I1025 22:35:05.300133       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1025 22:35:05.300225       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-416177. Assuming now as a timestamp.
	I1025 22:35:05.300268       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1025 22:35:05.300317       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 22:35:05.300489       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1025 22:35:05.300670       1 event.go:294] "Event occurred" object="test-preload-416177" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-416177 event: Registered Node test-preload-416177 in Controller"
	I1025 22:35:05.302916       1 shared_informer.go:262] Caches are synced for PVC protection
	I1025 22:35:05.306561       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1025 22:35:05.331034       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 22:35:05.337312       1 shared_informer.go:262] Caches are synced for stateful set
	I1025 22:35:05.349990       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1025 22:35:05.357265       1 shared_informer.go:262] Caches are synced for HPA
	I1025 22:35:05.370543       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 22:35:05.801885       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 22:35:05.819389       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 22:35:05.819415       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [013793f687b7e3f2bd3090a34e95373fc231ee861bb3a07a24c4029cb9237010] <==
	I1025 22:34:54.649042       1 node.go:163] Successfully retrieved node IP: 192.168.39.136
	I1025 22:34:54.649153       1 server_others.go:138] "Detected node IP" address="192.168.39.136"
	I1025 22:34:54.649265       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 22:34:54.709930       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1025 22:34:54.709963       1 server_others.go:206] "Using iptables Proxier"
	I1025 22:34:54.710000       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 22:34:54.710568       1 server.go:661] "Version info" version="v1.24.4"
	I1025 22:34:54.710598       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:34:54.716711       1 config.go:317] "Starting service config controller"
	I1025 22:34:54.717101       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 22:34:54.717148       1 config.go:226] "Starting endpoint slice config controller"
	I1025 22:34:54.717154       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 22:34:54.719979       1 config.go:444] "Starting node config controller"
	I1025 22:34:54.720004       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 22:34:54.817616       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 22:34:54.817672       1 shared_informer.go:262] Caches are synced for service config
	I1025 22:34:54.820287       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [38afe0f841985b93dc3e0c5f6e3aa5eabd75f572f334b5f20072a1619cc160a1] <==
	I1025 22:34:49.001568       1 serving.go:348] Generated self-signed cert in-memory
	W1025 22:34:52.750945       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 22:34:52.751438       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 22:34:52.751480       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 22:34:52.751493       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 22:34:52.797825       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1025 22:34:52.797928       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:34:52.806484       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1025 22:34:52.806684       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 22:34:52.806745       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 22:34:52.806780       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1025 22:34:52.907485       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541159    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa471af1-ddb1-407e-a720-5977ac4cdebc-kube-proxy\") pod \"kube-proxy-fn45p\" (UID: \"aa471af1-ddb1-407e-a720-5977ac4cdebc\") " pod="kube-system/kube-proxy-fn45p"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541218    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa471af1-ddb1-407e-a720-5977ac4cdebc-xtables-lock\") pod \"kube-proxy-fn45p\" (UID: \"aa471af1-ddb1-407e-a720-5977ac4cdebc\") " pod="kube-system/kube-proxy-fn45p"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541268    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvc4v\" (UniqueName: \"kubernetes.io/projected/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-kube-api-access-mvc4v\") pod \"coredns-6d4b75cb6d-jx7ls\" (UID: \"fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40\") " pod="kube-system/coredns-6d4b75cb6d-jx7ls"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541330    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume\") pod \"coredns-6d4b75cb6d-jx7ls\" (UID: \"fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40\") " pod="kube-system/coredns-6d4b75cb6d-jx7ls"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541374    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa471af1-ddb1-407e-a720-5977ac4cdebc-lib-modules\") pod \"kube-proxy-fn45p\" (UID: \"aa471af1-ddb1-407e-a720-5977ac4cdebc\") " pod="kube-system/kube-proxy-fn45p"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541504    1138 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sdb\" (UniqueName: \"kubernetes.io/projected/aa471af1-ddb1-407e-a720-5977ac4cdebc-kube-api-access-l9sdb\") pod \"kube-proxy-fn45p\" (UID: \"aa471af1-ddb1-407e-a720-5977ac4cdebc\") " pod="kube-system/kube-proxy-fn45p"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.541701    1138 reconciler.go:159] "Reconciler: start to sync state"
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.889033    1138 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh2pw\" (UniqueName: \"kubernetes.io/projected/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-kube-api-access-bh2pw\") pod \"2d9121fd-1be8-4855-8aaf-3e05683f0d0d\" (UID: \"2d9121fd-1be8-4855-8aaf-3e05683f0d0d\") "
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.889090    1138 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-config-volume\") pod \"2d9121fd-1be8-4855-8aaf-3e05683f0d0d\" (UID: \"2d9121fd-1be8-4855-8aaf-3e05683f0d0d\") "
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: E1025 22:34:53.889556    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: E1025 22:34:53.889686    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume podName:fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40 nodeName:}" failed. No retries permitted until 2024-10-25 22:34:54.389615385 +0000 UTC m=+7.144208342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume") pod "coredns-6d4b75cb6d-jx7ls" (UID: "fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40") : object "kube-system"/"coredns" not registered
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: W1025 22:34:53.890710    1138 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2d9121fd-1be8-4855-8aaf-3e05683f0d0d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: W1025 22:34:53.891045    1138 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/2d9121fd-1be8-4855-8aaf-3e05683f0d0d/volumes/kubernetes.io~projected/kube-api-access-bh2pw: clearQuota called, but quotas disabled
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.891168    1138 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d9121fd-1be8-4855-8aaf-3e05683f0d0d" (UID: "2d9121fd-1be8-4855-8aaf-3e05683f0d0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.891491    1138 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-kube-api-access-bh2pw" (OuterVolumeSpecName: "kube-api-access-bh2pw") pod "2d9121fd-1be8-4855-8aaf-3e05683f0d0d" (UID: "2d9121fd-1be8-4855-8aaf-3e05683f0d0d"). InnerVolumeSpecName "kube-api-access-bh2pw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.989972    1138 reconciler.go:384] "Volume detached for volume \"kube-api-access-bh2pw\" (UniqueName: \"kubernetes.io/projected/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-kube-api-access-bh2pw\") on node \"test-preload-416177\" DevicePath \"\""
	Oct 25 22:34:53 test-preload-416177 kubelet[1138]: I1025 22:34:53.990004    1138 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d9121fd-1be8-4855-8aaf-3e05683f0d0d-config-volume\") on node \"test-preload-416177\" DevicePath \"\""
	Oct 25 22:34:54 test-preload-416177 kubelet[1138]: E1025 22:34:54.394023    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 22:34:54 test-preload-416177 kubelet[1138]: E1025 22:34:54.394084    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume podName:fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40 nodeName:}" failed. No retries permitted until 2024-10-25 22:34:55.394068894 +0000 UTC m=+8.148661839 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume") pod "coredns-6d4b75cb6d-jx7ls" (UID: "fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40") : object "kube-system"/"coredns" not registered
	Oct 25 22:34:55 test-preload-416177 kubelet[1138]: E1025 22:34:55.400444    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 22:34:55 test-preload-416177 kubelet[1138]: E1025 22:34:55.400897    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume podName:fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40 nodeName:}" failed. No retries permitted until 2024-10-25 22:34:57.400815605 +0000 UTC m=+10.155408551 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume") pod "coredns-6d4b75cb6d-jx7ls" (UID: "fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40") : object "kube-system"/"coredns" not registered
	Oct 25 22:34:55 test-preload-416177 kubelet[1138]: E1025 22:34:55.485590    1138 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-jx7ls" podUID=fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40
	Oct 25 22:34:55 test-preload-416177 kubelet[1138]: I1025 22:34:55.490964    1138 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2d9121fd-1be8-4855-8aaf-3e05683f0d0d path="/var/lib/kubelet/pods/2d9121fd-1be8-4855-8aaf-3e05683f0d0d/volumes"
	Oct 25 22:34:57 test-preload-416177 kubelet[1138]: E1025 22:34:57.415594    1138 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 25 22:34:57 test-preload-416177 kubelet[1138]: E1025 22:34:57.415689    1138 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume podName:fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40 nodeName:}" failed. No retries permitted until 2024-10-25 22:35:01.415674166 +0000 UTC m=+14.170267124 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40-config-volume") pod "coredns-6d4b75cb6d-jx7ls" (UID: "fe4538ee-d7e3-4826-8d7a-fa1b7c5f6e40") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [df5569c9c3193f4b4164a38aa6674a31b6e29967ea49032c76a0ad82f5737218] <==
	I1025 22:34:54.823790       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-416177 -n test-preload-416177
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-416177 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-416177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-416177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-416177: (1.147649104s)
--- FAIL: TestPreload (171.84s)

                                                
                                    
TestKubernetesUpgrade (393.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m36.468925301s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-234842] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-234842" primary control-plane node in "kubernetes-upgrade-234842" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:41:03.648919  708411 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:41:03.649083  708411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:41:03.649093  708411 out.go:358] Setting ErrFile to fd 2...
	I1025 22:41:03.649098  708411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:41:03.649260  708411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:41:03.649820  708411 out.go:352] Setting JSON to false
	I1025 22:41:03.650857  708411 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19408,"bootTime":1729876656,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:41:03.650957  708411 start.go:139] virtualization: kvm guest
	I1025 22:41:03.653212  708411 out.go:177] * [kubernetes-upgrade-234842] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:41:03.654979  708411 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:41:03.655000  708411 notify.go:220] Checking for updates...
	I1025 22:41:03.657948  708411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:41:03.659219  708411 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:41:03.660504  708411 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:41:03.661874  708411 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:41:03.663220  708411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:41:03.665440  708411 config.go:182] Loaded profile config "NoKubernetes-532729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 22:41:03.665588  708411 config.go:182] Loaded profile config "cert-expiration-928371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:41:03.665744  708411 config.go:182] Loaded profile config "running-upgrade-587743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1025 22:41:03.665854  708411 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:41:03.708388  708411 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 22:41:03.709636  708411 start.go:297] selected driver: kvm2
	I1025 22:41:03.709654  708411 start.go:901] validating driver "kvm2" against <nil>
	I1025 22:41:03.709668  708411 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:41:03.710453  708411 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:41:03.710557  708411 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:41:03.727940  708411 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:41:03.727987  708411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 22:41:03.728262  708411 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 22:41:03.728299  708411 cni.go:84] Creating CNI manager for ""
	I1025 22:41:03.728367  708411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:41:03.728382  708411 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 22:41:03.728455  708411 start.go:340] cluster config:
	{Name:kubernetes-upgrade-234842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:41:03.728568  708411 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:41:03.730561  708411 out.go:177] * Starting "kubernetes-upgrade-234842" primary control-plane node in "kubernetes-upgrade-234842" cluster
	I1025 22:41:03.731868  708411 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:41:03.731900  708411 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1025 22:41:03.731908  708411 cache.go:56] Caching tarball of preloaded images
	I1025 22:41:03.731996  708411 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:41:03.732009  708411 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1025 22:41:03.732097  708411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/config.json ...
	I1025 22:41:03.732115  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/config.json: {Name:mkbe59cb9a703f615715162043ecc229a9cded40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:03.732264  708411 start.go:360] acquireMachinesLock for kubernetes-upgrade-234842: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:41:09.401742  708411 start.go:364] duration metric: took 5.669431204s to acquireMachinesLock for "kubernetes-upgrade-234842"
	I1025 22:41:09.401818  708411 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-234842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:41:09.401931  708411 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 22:41:09.403912  708411 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 22:41:09.404118  708411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:41:09.404182  708411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:41:09.420913  708411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I1025 22:41:09.421381  708411 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:41:09.421983  708411 main.go:141] libmachine: Using API Version  1
	I1025 22:41:09.422011  708411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:41:09.422340  708411 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:41:09.422522  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetMachineName
	I1025 22:41:09.422649  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:09.422819  708411 start.go:159] libmachine.API.Create for "kubernetes-upgrade-234842" (driver="kvm2")
	I1025 22:41:09.422856  708411 client.go:168] LocalClient.Create starting
	I1025 22:41:09.422892  708411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem
	I1025 22:41:09.422943  708411 main.go:141] libmachine: Decoding PEM data...
	I1025 22:41:09.422967  708411 main.go:141] libmachine: Parsing certificate...
	I1025 22:41:09.423045  708411 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem
	I1025 22:41:09.423074  708411 main.go:141] libmachine: Decoding PEM data...
	I1025 22:41:09.423101  708411 main.go:141] libmachine: Parsing certificate...
	I1025 22:41:09.423125  708411 main.go:141] libmachine: Running pre-create checks...
	I1025 22:41:09.423138  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .PreCreateCheck
	I1025 22:41:09.423555  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetConfigRaw
	I1025 22:41:09.423978  708411 main.go:141] libmachine: Creating machine...
	I1025 22:41:09.423992  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Create
	I1025 22:41:09.424130  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) creating KVM machine...
	I1025 22:41:09.424148  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) creating network...
	I1025 22:41:09.425297  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found existing default KVM network
	I1025 22:41:09.426779  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:09.426603  708547 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002235e0}
	I1025 22:41:09.426808  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | created network xml: 
	I1025 22:41:09.426821  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | <network>
	I1025 22:41:09.426834  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   <name>mk-kubernetes-upgrade-234842</name>
	I1025 22:41:09.426854  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   <dns enable='no'/>
	I1025 22:41:09.426865  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   
	I1025 22:41:09.426902  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1025 22:41:09.426925  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |     <dhcp>
	I1025 22:41:09.426939  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1025 22:41:09.426960  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |     </dhcp>
	I1025 22:41:09.426971  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   </ip>
	I1025 22:41:09.426993  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG |   
	I1025 22:41:09.427035  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | </network>
	I1025 22:41:09.427058  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | 
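	The block above is the private libvirt network XML that minikube generated for this profile: bridge network mk-kubernetes-upgrade-234842 on 192.168.39.0/24 with DHCP enabled and DNS disabled. As a hedged illustration only (not part of this log, and assuming the XML has been saved to a hypothetical file mk-kubernetes-upgrade-234842.xml), an equivalent network could be created by hand with virsh:
	
	  # define the network from the generated XML, then bring it up
	  virsh --connect qemu:///system net-define mk-kubernetes-upgrade-234842.xml
	  virsh --connect qemu:///system net-start mk-kubernetes-upgrade-234842
	  # optionally have libvirt start it automatically on host boot
	  virsh --connect qemu:///system net-autostart mk-kubernetes-upgrade-234842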
	I1025 22:41:09.432884  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | trying to create private KVM network mk-kubernetes-upgrade-234842 192.168.39.0/24...
	I1025 22:41:09.506079  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | private KVM network mk-kubernetes-upgrade-234842 192.168.39.0/24 created
	I1025 22:41:09.506107  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:09.506064  708547 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:41:09.506120  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting up store path in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842 ...
	I1025 22:41:09.506138  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) building disk image from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 22:41:09.506265  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Downloading /home/jenkins/minikube-integration/19758-661979/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1025 22:41:09.773206  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:09.773070  708547 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa...
	I1025 22:41:09.879974  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:09.879855  708547 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/kubernetes-upgrade-234842.rawdisk...
	I1025 22:41:09.880016  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Writing magic tar header
	I1025 22:41:09.880059  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Writing SSH key tar header
	I1025 22:41:09.880099  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:09.879992  708547 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842 ...
	I1025 22:41:09.880138  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842
	I1025 22:41:09.880151  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines
	I1025 22:41:09.880164  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:41:09.880189  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842 (perms=drwx------)
	I1025 22:41:09.880225  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979
	I1025 22:41:09.880236  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines (perms=drwxr-xr-x)
	I1025 22:41:09.880254  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube (perms=drwxr-xr-x)
	I1025 22:41:09.880267  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins/minikube-integration/19758-661979 (perms=drwxrwxr-x)
	I1025 22:41:09.880283  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 22:41:09.880295  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 22:41:09.880313  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) creating domain...
	I1025 22:41:09.880328  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1025 22:41:09.880342  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home/jenkins
	I1025 22:41:09.880352  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | checking permissions on dir: /home
	I1025 22:41:09.880363  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | skipping /home - not owner
	I1025 22:41:09.881454  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) define libvirt domain using xml: 
	I1025 22:41:09.881475  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) <domain type='kvm'>
	I1025 22:41:09.881486  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <name>kubernetes-upgrade-234842</name>
	I1025 22:41:09.881499  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <memory unit='MiB'>2200</memory>
	I1025 22:41:09.881513  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <vcpu>2</vcpu>
	I1025 22:41:09.881522  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <features>
	I1025 22:41:09.881541  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <acpi/>
	I1025 22:41:09.881555  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <apic/>
	I1025 22:41:09.881574  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <pae/>
	I1025 22:41:09.881581  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     
	I1025 22:41:09.881589  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   </features>
	I1025 22:41:09.881594  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <cpu mode='host-passthrough'>
	I1025 22:41:09.881599  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   
	I1025 22:41:09.881603  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   </cpu>
	I1025 22:41:09.881609  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <os>
	I1025 22:41:09.881614  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <type>hvm</type>
	I1025 22:41:09.881662  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <boot dev='cdrom'/>
	I1025 22:41:09.881689  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <boot dev='hd'/>
	I1025 22:41:09.881700  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <bootmenu enable='no'/>
	I1025 22:41:09.881715  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   </os>
	I1025 22:41:09.881726  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   <devices>
	I1025 22:41:09.881734  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <disk type='file' device='cdrom'>
	I1025 22:41:09.881752  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/boot2docker.iso'/>
	I1025 22:41:09.881762  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <target dev='hdc' bus='scsi'/>
	I1025 22:41:09.881773  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <readonly/>
	I1025 22:41:09.881781  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </disk>
	I1025 22:41:09.881791  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <disk type='file' device='disk'>
	I1025 22:41:09.881804  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1025 22:41:09.881829  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/kubernetes-upgrade-234842.rawdisk'/>
	I1025 22:41:09.881840  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <target dev='hda' bus='virtio'/>
	I1025 22:41:09.881850  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </disk>
	I1025 22:41:09.881865  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <interface type='network'>
	I1025 22:41:09.881887  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <source network='mk-kubernetes-upgrade-234842'/>
	I1025 22:41:09.881897  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <model type='virtio'/>
	I1025 22:41:09.881906  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </interface>
	I1025 22:41:09.881917  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <interface type='network'>
	I1025 22:41:09.881929  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <source network='default'/>
	I1025 22:41:09.881944  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <model type='virtio'/>
	I1025 22:41:09.881955  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </interface>
	I1025 22:41:09.881965  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <serial type='pty'>
	I1025 22:41:09.881973  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <target port='0'/>
	I1025 22:41:09.881983  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </serial>
	I1025 22:41:09.881993  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <console type='pty'>
	I1025 22:41:09.882004  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <target type='serial' port='0'/>
	I1025 22:41:09.882026  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </console>
	I1025 22:41:09.882044  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     <rng model='virtio'>
	I1025 22:41:09.882058  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)       <backend model='random'>/dev/random</backend>
	I1025 22:41:09.882068  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     </rng>
	I1025 22:41:09.882074  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     
	I1025 22:41:09.882086  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)     
	I1025 22:41:09.882104  708411 main.go:141] libmachine: (kubernetes-upgrade-234842)   </devices>
	I1025 22:41:09.882119  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) </domain>
	I1025 22:41:09.882151  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) 
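	The XML above is the libvirt domain definition built from the cluster config: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a SCSI CD-ROM, the raw disk image as a virtio disk, a virtio RNG, and two virtio NICs (one on mk-kubernetes-upgrade-234842, one on the libvirt default network). As a hedged illustration only (again assuming the XML is saved to a hypothetical file kubernetes-upgrade-234842.xml), the equivalent manual steps with virsh would look roughly like this:
	
	  # define the persistent domain from the generated XML and boot it
	  virsh --connect qemu:///system define kubernetes-upgrade-234842.xml
	  virsh --connect qemu:///system start kubernetes-upgrade-234842
	  # the log below polls for the guest's DHCP lease; virsh can show it directly
	  virsh --connect qemu:///system net-dhcp-leases mk-kubernetes-upgrade-234842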
	I1025 22:41:09.890025  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:5d:e0:7d in network default
	I1025 22:41:09.890686  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) starting domain...
	I1025 22:41:09.890715  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:09.890725  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) ensuring networks are active...
	I1025 22:41:09.891589  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Ensuring network default is active
	I1025 22:41:09.891991  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Ensuring network mk-kubernetes-upgrade-234842 is active
	I1025 22:41:09.892757  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) getting domain XML...
	I1025 22:41:09.893643  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) creating domain...
	I1025 22:41:11.206816  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) waiting for IP...
	I1025 22:41:11.209189  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.209697  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.209754  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:11.209681  708547 retry.go:31] will retry after 253.554576ms: waiting for domain to come up
	I1025 22:41:11.465257  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.465876  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.465927  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:11.465845  708547 retry.go:31] will retry after 383.270713ms: waiting for domain to come up
	I1025 22:41:11.851222  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.851764  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:11.851798  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:11.851722  708547 retry.go:31] will retry after 354.845651ms: waiting for domain to come up
	I1025 22:41:12.551711  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:12.552089  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:12.552139  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:12.552064  708547 retry.go:31] will retry after 513.738626ms: waiting for domain to come up
	I1025 22:41:13.068040  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:13.068547  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:13.068604  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:13.068537  708547 retry.go:31] will retry after 466.687121ms: waiting for domain to come up
	I1025 22:41:13.537336  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:13.537825  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:13.537873  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:13.537805  708547 retry.go:31] will retry after 913.973161ms: waiting for domain to come up
	I1025 22:41:14.453713  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:14.454129  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:14.454162  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:14.454100  708547 retry.go:31] will retry after 1.084485454s: waiting for domain to come up
	I1025 22:41:15.540351  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:15.540801  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:15.540870  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:15.540775  708547 retry.go:31] will retry after 1.360276022s: waiting for domain to come up
	I1025 22:41:16.902836  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:16.903372  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:16.903401  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:16.903332  708547 retry.go:31] will retry after 1.756618841s: waiting for domain to come up
	I1025 22:41:18.662210  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:18.662748  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:18.662795  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:18.662717  708547 retry.go:31] will retry after 2.096240487s: waiting for domain to come up
	I1025 22:41:20.760826  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:20.761305  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:20.761364  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:20.761277  708547 retry.go:31] will retry after 1.77130014s: waiting for domain to come up
	I1025 22:41:22.535235  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:22.535725  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:22.535755  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:22.535679  708547 retry.go:31] will retry after 3.477400809s: waiting for domain to come up
	I1025 22:41:26.015178  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:26.015681  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:26.015707  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:26.015652  708547 retry.go:31] will retry after 2.729208162s: waiting for domain to come up
	I1025 22:41:28.748538  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:28.748870  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find current IP address of domain kubernetes-upgrade-234842 in network mk-kubernetes-upgrade-234842
	I1025 22:41:28.748900  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | I1025 22:41:28.748833  708547 retry.go:31] will retry after 4.434831901s: waiting for domain to come up
	I1025 22:41:33.186101  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.186567  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) found domain IP: 192.168.39.249
	I1025 22:41:33.186601  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has current primary IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.186610  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) reserving static IP address...
	I1025 22:41:33.186991  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-234842", mac: "52:54:00:6a:67:a6", ip: "192.168.39.249"} in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.261097  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Getting to WaitForSSH function...
	I1025 22:41:33.261155  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) reserved static IP address 192.168.39.249 for domain kubernetes-upgrade-234842
	I1025 22:41:33.261170  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) waiting for SSH...
	I1025 22:41:33.263803  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.264333  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.264364  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.264518  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Using SSH client type: external
	I1025 22:41:33.264544  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa (-rw-------)
	I1025 22:41:33.264578  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:41:33.264606  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | About to run SSH command:
	I1025 22:41:33.264619  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | exit 0
	I1025 22:41:33.385090  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | SSH cmd err, output: <nil>: 
	I1025 22:41:33.385316  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) KVM machine creation complete
	I1025 22:41:33.385689  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetConfigRaw
	I1025 22:41:33.386319  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:33.386532  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:33.386672  708411 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1025 22:41:33.386688  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetState
	I1025 22:41:33.387906  708411 main.go:141] libmachine: Detecting operating system of created instance...
	I1025 22:41:33.387920  708411 main.go:141] libmachine: Waiting for SSH to be available...
	I1025 22:41:33.387925  708411 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 22:41:33.387930  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:33.390119  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.390532  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.390563  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.390674  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:33.390853  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.390988  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.391152  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:33.391290  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:33.391481  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:33.391492  708411 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 22:41:33.488295  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:41:33.488340  708411 main.go:141] libmachine: Detecting the provisioner...
	I1025 22:41:33.488351  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:33.491452  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.491857  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.491885  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.492124  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:33.492317  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.492478  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.492586  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:33.492755  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:33.492939  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:33.492967  708411 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1025 22:41:33.589934  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1025 22:41:33.590010  708411 main.go:141] libmachine: found compatible host: buildroot
	I1025 22:41:33.590025  708411 main.go:141] libmachine: Provisioning with buildroot...
	I1025 22:41:33.590041  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetMachineName
	I1025 22:41:33.590302  708411 buildroot.go:166] provisioning hostname "kubernetes-upgrade-234842"
	I1025 22:41:33.590331  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetMachineName
	I1025 22:41:33.590557  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:33.593244  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.593611  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.593653  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.593830  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:33.593996  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.594153  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.594247  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:33.594468  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:33.594672  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:33.594689  708411 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-234842 && echo "kubernetes-upgrade-234842" | sudo tee /etc/hostname
	I1025 22:41:33.706977  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-234842
	
	I1025 22:41:33.707007  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:33.709771  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.710105  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.710141  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.710312  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:33.710523  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.710710  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:33.710845  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:33.710990  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:33.711215  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:33.711241  708411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-234842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-234842/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-234842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:41:33.818107  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:41:33.818142  708411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:41:33.818502  708411 buildroot.go:174] setting up certificates
	I1025 22:41:33.818518  708411 provision.go:84] configureAuth start
	I1025 22:41:33.818535  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetMachineName
	I1025 22:41:33.818840  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetIP
	I1025 22:41:33.821392  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.821762  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.821793  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.821876  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:33.823893  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.824267  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:33.824297  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:33.824452  708411 provision.go:143] copyHostCerts
	I1025 22:41:33.824529  708411 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:41:33.824545  708411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:41:33.824605  708411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:41:33.824731  708411 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:41:33.824744  708411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:41:33.824777  708411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:41:33.824872  708411 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:41:33.824883  708411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:41:33.824906  708411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:41:33.825004  708411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-234842 san=[127.0.0.1 192.168.39.249 kubernetes-upgrade-234842 localhost minikube]
	I1025 22:41:34.187131  708411 provision.go:177] copyRemoteCerts
	I1025 22:41:34.187194  708411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:41:34.187221  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.189908  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.190288  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.190325  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.190550  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.190771  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.190933  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.191085  708411 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:41:34.275216  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:41:34.300026  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 22:41:34.323503  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:41:34.347233  708411 provision.go:87] duration metric: took 528.699828ms to configureAuth
	I1025 22:41:34.347267  708411 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:41:34.347495  708411 config.go:182] Loaded profile config "kubernetes-upgrade-234842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1025 22:41:34.347585  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.350229  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.350629  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.350657  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.350791  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.351008  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.351190  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.351320  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.351526  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:34.351793  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:34.351822  708411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:41:34.572000  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:41:34.572029  708411 main.go:141] libmachine: Checking connection to Docker...
	I1025 22:41:34.572038  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetURL
	I1025 22:41:34.573331  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | using libvirt version 6000000
	I1025 22:41:34.575633  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.576010  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.576051  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.576208  708411 main.go:141] libmachine: Docker is up and running!
	I1025 22:41:34.576222  708411 main.go:141] libmachine: Reticulating splines...
	I1025 22:41:34.576231  708411 client.go:171] duration metric: took 25.153364926s to LocalClient.Create
	I1025 22:41:34.576260  708411 start.go:167] duration metric: took 25.153441645s to libmachine.API.Create "kubernetes-upgrade-234842"
	I1025 22:41:34.576273  708411 start.go:293] postStartSetup for "kubernetes-upgrade-234842" (driver="kvm2")
	I1025 22:41:34.576289  708411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:41:34.576311  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:34.576536  708411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:41:34.576562  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.579034  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.579392  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.579421  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.579595  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.579753  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.579876  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.579987  708411 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:41:34.660565  708411 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:41:34.664711  708411 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:41:34.664739  708411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:41:34.664805  708411 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:41:34.664886  708411 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:41:34.665016  708411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:41:34.674679  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:41:34.699708  708411 start.go:296] duration metric: took 123.416459ms for postStartSetup
	I1025 22:41:34.699767  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetConfigRaw
	I1025 22:41:34.700421  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetIP
	I1025 22:41:34.703188  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.703513  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.703546  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.703735  708411 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/config.json ...
	I1025 22:41:34.703911  708411 start.go:128] duration metric: took 25.301966912s to createHost
	I1025 22:41:34.703935  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.706022  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.706314  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.706343  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.706461  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.706633  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.706796  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.706927  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.707092  708411 main.go:141] libmachine: Using SSH client type: native
	I1025 22:41:34.707298  708411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1025 22:41:34.707312  708411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:41:34.805825  708411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729896094.783288421
	
	I1025 22:41:34.805853  708411 fix.go:216] guest clock: 1729896094.783288421
	I1025 22:41:34.805862  708411 fix.go:229] Guest: 2024-10-25 22:41:34.783288421 +0000 UTC Remote: 2024-10-25 22:41:34.703924166 +0000 UTC m=+31.094916792 (delta=79.364255ms)
	I1025 22:41:34.805885  708411 fix.go:200] guest clock delta is within tolerance: 79.364255ms
	I1025 22:41:34.805891  708411 start.go:83] releasing machines lock for "kubernetes-upgrade-234842", held for 25.404114773s
	I1025 22:41:34.805916  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:34.806195  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetIP
	I1025 22:41:34.809169  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.809503  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.809535  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.809664  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:34.810098  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:34.810303  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:41:34.810405  708411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:41:34.810469  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.810526  708411 ssh_runner.go:195] Run: cat /version.json
	I1025 22:41:34.810550  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:41:34.813171  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.813411  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.813523  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.813548  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.813678  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.813794  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:34.813822  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:34.813849  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.813964  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:41:34.814028  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.814084  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:41:34.814133  708411 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:41:34.814441  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:41:34.814577  708411 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:41:34.924434  708411 ssh_runner.go:195] Run: systemctl --version
	I1025 22:41:34.931067  708411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:41:35.086922  708411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:41:35.094306  708411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:41:35.094393  708411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:41:35.110334  708411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:41:35.110370  708411 start.go:495] detecting cgroup driver to use...
	I1025 22:41:35.110445  708411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:41:35.131911  708411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:41:35.148401  708411 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:41:35.148470  708411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:41:35.163707  708411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:41:35.179083  708411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:41:35.309220  708411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:41:35.473734  708411 docker.go:233] disabling docker service ...
	I1025 22:41:35.473808  708411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:41:35.488624  708411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:41:35.501676  708411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:41:35.628339  708411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:41:35.735298  708411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:41:35.748731  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:41:35.769011  708411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 22:41:35.769078  708411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:41:35.780000  708411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:41:35.780073  708411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:41:35.790107  708411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:41:35.800099  708411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:41:35.810473  708411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:41:35.821451  708411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:41:35.830421  708411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:41:35.830474  708411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:41:35.846001  708411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:41:35.856407  708411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:41:35.973954  708411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 22:41:36.072256  708411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:41:36.072339  708411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:41:36.078012  708411 start.go:563] Will wait 60s for crictl version
	I1025 22:41:36.078078  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:36.085602  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:41:36.132024  708411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:41:36.132133  708411 ssh_runner.go:195] Run: crio --version
	I1025 22:41:36.161072  708411 ssh_runner.go:195] Run: crio --version
	I1025 22:41:36.190618  708411 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1025 22:41:36.191976  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetIP
	I1025 22:41:36.195107  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:36.195498  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:41:24 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:41:36.195533  708411 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:41:36.195776  708411 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 22:41:36.199816  708411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:41:36.212377  708411 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-234842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:41:36.212500  708411 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:41:36.212551  708411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:41:36.247601  708411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:41:36.247679  708411 ssh_runner.go:195] Run: which lz4
	I1025 22:41:36.251674  708411 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:41:36.257325  708411 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:41:36.257359  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1025 22:41:37.959601  708411 crio.go:462] duration metric: took 1.707951754s to copy over tarball
	I1025 22:41:37.959704  708411 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:41:40.497041  708411 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.537296591s)
	I1025 22:41:40.497076  708411 crio.go:469] duration metric: took 2.537443605s to extract the tarball
	I1025 22:41:40.497087  708411 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:41:40.541742  708411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:41:40.588805  708411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:41:40.588833  708411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 22:41:40.588907  708411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:41:40.588967  708411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:40.588933  708411 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:40.588967  708411 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1025 22:41:40.588917  708411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:40.588970  708411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:40.588979  708411 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1025 22:41:40.588983  708411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:40.590648  708411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:40.590676  708411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:41:40.590674  708411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:40.590654  708411 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1025 22:41:40.590716  708411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:40.590730  708411 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 22:41:40.590650  708411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:40.590651  708411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:40.805743  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 22:41:40.816333  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:40.852832  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:40.856445  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1025 22:41:40.860764  708411 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1025 22:41:40.860812  708411 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1025 22:41:40.860864  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:40.861975  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:40.868899  708411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1025 22:41:40.868972  708411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:40.869025  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:40.881687  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:40.920470  708411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1025 22:41:40.920529  708411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:40.920587  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:40.929867  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:40.969205  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:41:40.969348  708411 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1025 22:41:40.969396  708411 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1025 22:41:40.969445  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:40.975922  708411 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1025 22:41:40.975964  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:40.975974  708411 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:40.976004  708411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1025 22:41:40.976046  708411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:40.976065  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:40.976089  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:40.976015  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:41.060831  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:41:41.060851  708411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1025 22:41:41.060892  708411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:41.060900  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:41:41.060928  708411 ssh_runner.go:195] Run: which crictl
	I1025 22:41:41.060988  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:41.061050  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:41.091012  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:41.091012  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:41.195793  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:41:41.195917  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:41:41.195817  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:41:41.195922  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:41.196157  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:41.222219  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:41:41.222220  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:41.357420  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:41.358587  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:41:41.358662  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:41:41.358710  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1025 22:41:41.358671  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1025 22:41:41.358771  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1025 22:41:41.358804  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:41:41.430666  708411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:41:41.446419  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1025 22:41:41.446453  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1025 22:41:41.446527  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1025 22:41:41.478104  708411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1025 22:41:41.730585  708411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:41:41.884686  708411 cache_images.go:92] duration metric: took 1.29583353s to LoadCachedImages
	W1025 22:41:41.884781  708411 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I1025 22:41:41.884797  708411 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.20.0 crio true true} ...
	I1025 22:41:41.884928  708411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-234842 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:41:41.885033  708411 ssh_runner.go:195] Run: crio config
	I1025 22:41:41.933120  708411 cni.go:84] Creating CNI manager for ""
	I1025 22:41:41.933144  708411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:41:41.933157  708411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 22:41:41.933181  708411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-234842 NodeName:kubernetes-upgrade-234842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 22:41:41.933334  708411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-234842"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:41:41.933414  708411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1025 22:41:41.943636  708411 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:41:41.943712  708411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:41:41.953435  708411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1025 22:41:41.969649  708411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:41:41.985989  708411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1025 22:41:42.002908  708411 ssh_runner.go:195] Run: grep 192.168.39.249	control-plane.minikube.internal$ /etc/hosts
	I1025 22:41:42.007327  708411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:41:42.020244  708411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:41:42.138172  708411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:41:42.156160  708411 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842 for IP: 192.168.39.249
	I1025 22:41:42.156187  708411 certs.go:194] generating shared ca certs ...
	I1025 22:41:42.156215  708411 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.156397  708411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:41:42.156452  708411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:41:42.156465  708411 certs.go:256] generating profile certs ...
	I1025 22:41:42.156534  708411 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.key
	I1025 22:41:42.156553  708411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.crt with IP's: []
	I1025 22:41:42.364907  708411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.crt ...
	I1025 22:41:42.364940  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.crt: {Name:mk909e88131f914e024e210df5f6842fa85ff940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.365129  708411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.key ...
	I1025 22:41:42.365143  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.key: {Name:mke4a672d77fcc44920254fd103b7c76cd33c930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.365295  708411 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key.4dcc6bf7
	I1025 22:41:42.365314  708411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt.4dcc6bf7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249]
	I1025 22:41:42.562909  708411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt.4dcc6bf7 ...
	I1025 22:41:42.562946  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt.4dcc6bf7: {Name:mk1d7b8041cbd4851b81a8468c8b6c2801389321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.563148  708411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key.4dcc6bf7 ...
	I1025 22:41:42.563167  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key.4dcc6bf7: {Name:mk4bae528130192ef8b499a96860a3bee58eb908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.563268  708411 certs.go:381] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt.4dcc6bf7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt
	I1025 22:41:42.563363  708411 certs.go:385] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key.4dcc6bf7 -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key
	I1025 22:41:42.563441  708411 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.key
	I1025 22:41:42.563460  708411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.crt with IP's: []
	I1025 22:41:42.797730  708411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.crt ...
	I1025 22:41:42.797780  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.crt: {Name:mk590895854ab3d720b7485df382d864567a8172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.798053  708411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.key ...
	I1025 22:41:42.798079  708411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.key: {Name:mk2804536622d9a9c1f0a892e6d353df64b108e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:41:42.798381  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:41:42.798446  708411 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:41:42.798462  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:41:42.798495  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:41:42.798534  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:41:42.798568  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:41:42.798631  708411 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:41:42.799570  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:41:42.828048  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:41:42.855093  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:41:42.880083  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:41:42.905181  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1025 22:41:42.929293  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:41:42.957099  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:41:42.986184  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 22:41:43.019428  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:41:43.052795  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:41:43.082262  708411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:41:43.106804  708411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:41:43.128499  708411 ssh_runner.go:195] Run: openssl version
	I1025 22:41:43.136854  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:41:43.151920  708411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:41:43.158004  708411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:41:43.158075  708411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:41:43.164221  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:41:43.175431  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:41:43.186459  708411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:41:43.191193  708411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:41:43.191257  708411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:41:43.197293  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:41:43.210797  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:41:43.225731  708411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:41:43.230637  708411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:41:43.230702  708411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:41:43.236546  708411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:41:43.247231  708411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:41:43.251739  708411 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 22:41:43.251809  708411 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-234842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:41:43.251919  708411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:41:43.251975  708411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:41:43.294016  708411 cri.go:89] found id: ""
	I1025 22:41:43.294109  708411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:41:43.304678  708411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:41:43.315074  708411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:41:43.325090  708411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:41:43.325109  708411 kubeadm.go:157] found existing configuration files:
	
	I1025 22:41:43.325164  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:41:43.334651  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:41:43.334720  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:41:43.345004  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:41:43.354107  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:41:43.354237  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:41:43.364603  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:41:43.373686  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:41:43.373762  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:41:43.383612  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:41:43.392565  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:41:43.392639  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:41:43.405437  708411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:41:43.540430  708411 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:41:43.540694  708411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:41:43.719798  708411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:41:43.719961  708411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:41:43.720094  708411 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:41:43.945087  708411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:41:44.040116  708411 out.go:235]   - Generating certificates and keys ...
	I1025 22:41:44.040269  708411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:41:44.040371  708411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:41:44.047004  708411 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 22:41:44.173958  708411 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1025 22:41:44.412935  708411 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1025 22:41:44.607246  708411 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1025 22:41:45.002370  708411 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1025 22:41:45.002558  708411 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I1025 22:41:45.248275  708411 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1025 22:41:45.248524  708411 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I1025 22:41:45.373129  708411 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 22:41:45.437195  708411 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 22:41:45.592607  708411 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1025 22:41:45.593312  708411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:41:46.031081  708411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:41:46.258430  708411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:41:46.411056  708411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:41:46.461779  708411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:41:46.484466  708411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:41:46.487343  708411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:41:46.487425  708411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:41:46.610923  708411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:41:46.612793  708411 out.go:235]   - Booting up control plane ...
	I1025 22:41:46.612901  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:41:46.621927  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:41:46.623415  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:41:46.624527  708411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:41:46.630038  708411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:42:26.623989  708411 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:42:26.624758  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:42:26.625031  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:42:31.625030  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:42:31.625312  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:42:41.624707  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:42:41.625017  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:43:01.624610  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:43:01.624883  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:43:41.626413  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:43:41.626739  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:43:41.626775  708411 kubeadm.go:310] 
	I1025 22:43:41.626848  708411 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 22:43:41.627180  708411 kubeadm.go:310] 		timed out waiting for the condition
	I1025 22:43:41.627209  708411 kubeadm.go:310] 
	I1025 22:43:41.627268  708411 kubeadm.go:310] 	This error is likely caused by:
	I1025 22:43:41.627324  708411 kubeadm.go:310] 		- The kubelet is not running
	I1025 22:43:41.627486  708411 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:43:41.627504  708411 kubeadm.go:310] 
	I1025 22:43:41.627668  708411 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:43:41.627734  708411 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 22:43:41.627784  708411 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 22:43:41.627795  708411 kubeadm.go:310] 
	I1025 22:43:41.627949  708411 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:43:41.628075  708411 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 22:43:41.628087  708411 kubeadm.go:310] 
	I1025 22:43:41.628234  708411 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 22:43:41.628358  708411 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 22:43:41.628464  708411 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 22:43:41.628573  708411 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 22:43:41.628584  708411 kubeadm.go:310] 
	I1025 22:43:41.631205  708411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:43:41.631311  708411 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:43:41.631419  708411 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1025 22:43:41.631597  708411 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-234842 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 22:43:41.631650  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:43:42.839971  708411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.208283175s)
	I1025 22:43:42.840056  708411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:43:42.858036  708411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:43:42.872790  708411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:43:42.872815  708411 kubeadm.go:157] found existing configuration files:
	
	I1025 22:43:42.872860  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:43:42.883838  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:43:42.883896  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:43:42.894583  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:43:42.904705  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:43:42.904769  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:43:42.917077  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:43:42.927035  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:43:42.927098  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:43:42.938844  708411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:43:42.949768  708411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:43:42.949839  708411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:43:42.961075  708411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:43:43.043747  708411 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:43:43.043859  708411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:43:43.209052  708411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:43:43.209228  708411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:43:43.209411  708411 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:43:43.399717  708411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:43:43.588066  708411 out.go:235]   - Generating certificates and keys ...
	I1025 22:43:43.588204  708411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:43:43.588299  708411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:43:43.588447  708411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:43:43.588548  708411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:43:43.588666  708411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:43:43.588751  708411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:43:43.588833  708411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:43:43.588926  708411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:43:43.589042  708411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:43:43.589153  708411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:43:43.589214  708411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:43:43.589298  708411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:43:43.589381  708411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:43:43.852622  708411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:43:44.055946  708411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:43:44.212109  708411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:43:44.228619  708411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:43:44.231647  708411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:43:44.231702  708411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:43:44.381316  708411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:43:44.482262  708411 out.go:235]   - Booting up control plane ...
	I1025 22:43:44.482443  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:43:44.482562  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:43:44.482689  708411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:43:44.482816  708411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:43:44.483073  708411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:44:24.398823  708411 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:44:24.399221  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:44:24.399464  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:44:29.400117  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:44:29.405258  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:44:39.401353  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:44:39.401580  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:44:59.401110  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:44:59.401367  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:45:39.401562  708411 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:45:39.401836  708411 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:45:39.401851  708411 kubeadm.go:310] 
	I1025 22:45:39.401898  708411 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 22:45:39.401958  708411 kubeadm.go:310] 		timed out waiting for the condition
	I1025 22:45:39.401968  708411 kubeadm.go:310] 
	I1025 22:45:39.402014  708411 kubeadm.go:310] 	This error is likely caused by:
	I1025 22:45:39.402058  708411 kubeadm.go:310] 		- The kubelet is not running
	I1025 22:45:39.402186  708411 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:45:39.402195  708411 kubeadm.go:310] 
	I1025 22:45:39.402346  708411 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:45:39.402407  708411 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 22:45:39.402441  708411 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 22:45:39.402466  708411 kubeadm.go:310] 
	I1025 22:45:39.402634  708411 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:45:39.402754  708411 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 22:45:39.402765  708411 kubeadm.go:310] 
	I1025 22:45:39.402910  708411 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 22:45:39.403079  708411 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 22:45:39.403178  708411 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 22:45:39.403271  708411 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 22:45:39.403281  708411 kubeadm.go:310] 
	I1025 22:45:39.403641  708411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:45:39.403738  708411 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:45:39.403850  708411 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 22:45:39.403945  708411 kubeadm.go:394] duration metric: took 3m56.152140332s to StartCluster
	I1025 22:45:39.404007  708411 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:45:39.404066  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:45:39.448455  708411 cri.go:89] found id: ""
	I1025 22:45:39.448489  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.448498  708411 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:45:39.448505  708411 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:45:39.448569  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:45:39.486175  708411 cri.go:89] found id: ""
	I1025 22:45:39.486209  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.486220  708411 logs.go:284] No container was found matching "etcd"
	I1025 22:45:39.486228  708411 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:45:39.486298  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:45:39.523874  708411 cri.go:89] found id: ""
	I1025 22:45:39.523911  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.523924  708411 logs.go:284] No container was found matching "coredns"
	I1025 22:45:39.523933  708411 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:45:39.524015  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:45:39.570080  708411 cri.go:89] found id: ""
	I1025 22:45:39.570112  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.570122  708411 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:45:39.570130  708411 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:45:39.570197  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:45:39.606860  708411 cri.go:89] found id: ""
	I1025 22:45:39.606892  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.606902  708411 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:45:39.606909  708411 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:45:39.606979  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:45:39.651739  708411 cri.go:89] found id: ""
	I1025 22:45:39.651771  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.651782  708411 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:45:39.651791  708411 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:45:39.651844  708411 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:45:39.687580  708411 cri.go:89] found id: ""
	I1025 22:45:39.687628  708411 logs.go:282] 0 containers: []
	W1025 22:45:39.687641  708411 logs.go:284] No container was found matching "kindnet"
	I1025 22:45:39.687657  708411 logs.go:123] Gathering logs for kubelet ...
	I1025 22:45:39.687676  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:45:39.746293  708411 logs.go:123] Gathering logs for dmesg ...
	I1025 22:45:39.746329  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:45:39.761971  708411 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:45:39.762001  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:45:39.898771  708411 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:45:39.898802  708411 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:45:39.898819  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:45:40.012595  708411 logs.go:123] Gathering logs for container status ...
	I1025 22:45:40.012639  708411 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 22:45:40.057927  708411 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 22:45:40.057988  708411 out.go:270] * 
	* 
	W1025 22:45:40.058060  708411 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:45:40.058080  708411 out.go:270] * 
	* 
	W1025 22:45:40.059274  708411 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 22:45:40.062905  708411 out.go:201] 
	W1025 22:45:40.064540  708411 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:45:40.064599  708411 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 22:45:40.064636  708411 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 22:45:40.066221  708411 out.go:201] 

                                                
                                                
** /stderr **
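The kubelet troubleshooting steps suggested in the kubeadm output above can be run against the node directly; a minimal sketch over minikube ssh, assuming the profile name and crio socket path shown in this log (CONTAINERID is a placeholder):

	# check kubelet state and recent journal entries on the node
	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# list control-plane containers and inspect the logs of a failing one
	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"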
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
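The suggestion printed in the stderr above can be tried by re-running the same start command with the kubelet cgroup driver pinned to systemd; a sketch reusing only flags already present in this run (not verified here to fix the v1.20.0 kubelet failure):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd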
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-234842
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-234842: (2.289149134s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-234842 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-234842 status --format={{.Host}}: exit status 7 (66.878188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
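After 'minikube stop' the status probe is expected to print Stopped and exit non-zero, which is why the harness notes "(may be ok)"; a sketch that tolerates that outcome:

	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 status --format={{.Host}} || echo "non-running host (may be ok after stop)"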
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.159246449s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-234842 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.804535ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-234842] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-234842
	    minikube start -p kubernetes-upgrade-234842 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2348422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-234842 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
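Before the restart below, the refused downgrade can be double-checked by confirming the server is still on v1.31.1; a sketch using the same context name the test uses:

	# expect serverVersion.gitVersion to still report v1.31.1 after the refused downgrade
	kubectl --context kubernetes-upgrade-234842 version --output=json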
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-234842 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.522959092s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-25 22:47:33.317460695 +0000 UTC m=+4343.438620211
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-234842 -n kubernetes-upgrade-234842
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-234842 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-234842 logs -n 25: (1.581215878s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147 sudo cat                | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147 sudo cat                | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147 sudo cat                | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-258147                         | enable-default-cni-258147 | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	| start   | -p embed-certs-601894                                | embed-certs-601894        | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-258147 pgrep -a                           | flannel-258147            | jenkins | v1.34.0 | 25 Oct 24 22:47 UTC | 25 Oct 24 22:47 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 22:47:15
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:47:15.315016  719897 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:47:15.315307  719897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:47:15.315317  719897 out.go:358] Setting ErrFile to fd 2...
	I1025 22:47:15.315321  719897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:47:15.315517  719897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:47:15.316117  719897 out.go:352] Setting JSON to false
	I1025 22:47:15.317535  719897 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19779,"bootTime":1729876656,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:47:15.317608  719897 start.go:139] virtualization: kvm guest
	I1025 22:47:15.319593  719897 out.go:177] * [embed-certs-601894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:47:15.321064  719897 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:47:15.321096  719897 notify.go:220] Checking for updates...
	I1025 22:47:15.322687  719897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:47:15.324371  719897 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:47:15.325628  719897 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:15.327210  719897 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:47:15.328460  719897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:47:15.330275  719897 config.go:182] Loaded profile config "bridge-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:15.330461  719897 config.go:182] Loaded profile config "flannel-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:15.330624  719897 config.go:182] Loaded profile config "kubernetes-upgrade-234842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:15.330750  719897 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:47:15.381043  719897 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 22:47:15.382819  719897 start.go:297] selected driver: kvm2
	I1025 22:47:15.382843  719897 start.go:901] validating driver "kvm2" against <nil>
	I1025 22:47:15.382859  719897 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:47:15.383887  719897 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:47:15.384003  719897 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:47:15.407816  719897 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:47:15.407905  719897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 22:47:15.408296  719897 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:47:15.408347  719897 cni.go:84] Creating CNI manager for ""
	I1025 22:47:15.408421  719897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:47:15.408430  719897 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 22:47:15.408509  719897 start.go:340] cluster config:
	{Name:embed-certs-601894 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-601894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:47:15.408699  719897 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:47:15.410179  719897 out.go:177] * Starting "embed-certs-601894" primary control-plane node in "embed-certs-601894" cluster
	I1025 22:47:15.411314  719897 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:47:15.411363  719897 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 22:47:15.411376  719897 cache.go:56] Caching tarball of preloaded images
	I1025 22:47:15.411470  719897 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:47:15.411483  719897 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 22:47:15.411622  719897 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/embed-certs-601894/config.json ...
	I1025 22:47:15.411652  719897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/embed-certs-601894/config.json: {Name:mkc6efc0e5e98ea631ade75e59c46353226b6a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:15.411859  719897 start.go:360] acquireMachinesLock for embed-certs-601894: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:47:15.411916  719897 start.go:364] duration metric: took 32.995µs to acquireMachinesLock for "embed-certs-601894"
	I1025 22:47:15.411944  719897 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601894 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-601894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:47:15.412026  719897 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 22:47:15.208654  718021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:47:15.226241  718021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:47:15.247110  718021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:47:15.247196  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-258147 minikube.k8s.io/updated_at=2024_10_25T22_47_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=bridge-258147 minikube.k8s.io/primary=true
	I1025 22:47:15.247198  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:15.416490  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:15.464522  718021 ops.go:34] apiserver oom_adj: -16
	I1025 22:47:15.916974  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:16.416863  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:16.917463  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:17.416612  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:17.917617  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:18.417112  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:18.917545  718021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:47:19.005876  718021 kubeadm.go:1113] duration metric: took 3.758765016s to wait for elevateKubeSystemPrivileges
	I1025 22:47:19.005921  718021 kubeadm.go:394] duration metric: took 16.028727994s to StartCluster
	I1025 22:47:19.005947  718021 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:19.006048  718021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:47:19.007243  718021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:19.007497  718021 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:47:19.007529  718021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 22:47:19.007551  718021 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:47:19.007699  718021 addons.go:69] Setting storage-provisioner=true in profile "bridge-258147"
	I1025 22:47:19.007718  718021 addons.go:234] Setting addon storage-provisioner=true in "bridge-258147"
	I1025 22:47:19.007762  718021 host.go:66] Checking if "bridge-258147" exists ...
	I1025 22:47:19.007778  718021 config.go:182] Loaded profile config "bridge-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:19.007715  718021 addons.go:69] Setting default-storageclass=true in profile "bridge-258147"
	I1025 22:47:19.007827  718021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-258147"
	I1025 22:47:19.008281  718021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:19.008327  718021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:19.008369  718021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:19.008414  718021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:19.009338  718021 out.go:177] * Verifying Kubernetes components...
	I1025 22:47:19.010699  718021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:47:19.029585  718021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I1025 22:47:19.029794  718021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I1025 22:47:19.030257  718021 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:19.030498  718021 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:19.030979  718021 main.go:141] libmachine: Using API Version  1
	I1025 22:47:19.031003  718021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:19.031296  718021 main.go:141] libmachine: Using API Version  1
	I1025 22:47:19.031323  718021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:19.031390  718021 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:19.031847  718021 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:19.031946  718021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:19.031999  718021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:19.032357  718021 main.go:141] libmachine: (bridge-258147) Calling .GetState
	I1025 22:47:19.036423  718021 addons.go:234] Setting addon default-storageclass=true in "bridge-258147"
	I1025 22:47:19.036467  718021 host.go:66] Checking if "bridge-258147" exists ...
	I1025 22:47:19.036826  718021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:19.036857  718021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:19.048552  718021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I1025 22:47:19.049185  718021 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:19.049810  718021 main.go:141] libmachine: Using API Version  1
	I1025 22:47:19.049838  718021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:19.053372  718021 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:19.053622  718021 main.go:141] libmachine: (bridge-258147) Calling .GetState
	I1025 22:47:19.055994  718021 main.go:141] libmachine: (bridge-258147) Calling .DriverName
	I1025 22:47:19.058100  718021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:47:15.165064  716426 pod_ready.go:103] pod "coredns-7c65d6cfc9-9ch69" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:17.660016  716426 pod_ready.go:103] pod "coredns-7c65d6cfc9-9ch69" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:19.058257  718021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42943
	I1025 22:47:19.058693  718021 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:19.059369  718021 main.go:141] libmachine: Using API Version  1
	I1025 22:47:19.059390  718021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:19.059401  718021 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:47:19.059414  718021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:47:19.059430  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHHostname
	I1025 22:47:19.059856  718021 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:19.060611  718021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:19.060664  718021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:19.062438  718021 main.go:141] libmachine: (bridge-258147) DBG | domain bridge-258147 has defined MAC address 52:54:00:7d:b6:d3 in network mk-bridge-258147
	I1025 22:47:19.062897  718021 main.go:141] libmachine: (bridge-258147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:d3", ip: ""} in network mk-bridge-258147: {Iface:virbr1 ExpiryTime:2024-10-25 23:46:47 +0000 UTC Type:0 Mac:52:54:00:7d:b6:d3 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:bridge-258147 Clientid:01:52:54:00:7d:b6:d3}
	I1025 22:47:19.062917  718021 main.go:141] libmachine: (bridge-258147) DBG | domain bridge-258147 has defined IP address 192.168.61.46 and MAC address 52:54:00:7d:b6:d3 in network mk-bridge-258147
	I1025 22:47:19.063083  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHPort
	I1025 22:47:19.063257  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHKeyPath
	I1025 22:47:19.063404  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHUsername
	I1025 22:47:19.063535  718021 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/bridge-258147/id_rsa Username:docker}
	I1025 22:47:19.082721  718021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I1025 22:47:19.083240  718021 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:19.083809  718021 main.go:141] libmachine: Using API Version  1
	I1025 22:47:19.083823  718021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:19.084188  718021 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:19.084339  718021 main.go:141] libmachine: (bridge-258147) Calling .GetState
	I1025 22:47:19.086781  718021 main.go:141] libmachine: (bridge-258147) Calling .DriverName
	I1025 22:47:19.087028  718021 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:47:19.087042  718021 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:47:19.087057  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHHostname
	I1025 22:47:19.090153  718021 main.go:141] libmachine: (bridge-258147) DBG | domain bridge-258147 has defined MAC address 52:54:00:7d:b6:d3 in network mk-bridge-258147
	I1025 22:47:19.090624  718021 main.go:141] libmachine: (bridge-258147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b6:d3", ip: ""} in network mk-bridge-258147: {Iface:virbr1 ExpiryTime:2024-10-25 23:46:47 +0000 UTC Type:0 Mac:52:54:00:7d:b6:d3 Iaid: IPaddr:192.168.61.46 Prefix:24 Hostname:bridge-258147 Clientid:01:52:54:00:7d:b6:d3}
	I1025 22:47:19.090663  718021 main.go:141] libmachine: (bridge-258147) DBG | domain bridge-258147 has defined IP address 192.168.61.46 and MAC address 52:54:00:7d:b6:d3 in network mk-bridge-258147
	I1025 22:47:19.090819  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHPort
	I1025 22:47:19.090934  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHKeyPath
	I1025 22:47:19.091009  718021 main.go:141] libmachine: (bridge-258147) Calling .GetSSHUsername
	I1025 22:47:19.091076  718021 sshutil.go:53] new ssh client: &{IP:192.168.61.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/bridge-258147/id_rsa Username:docker}
	I1025 22:47:19.191197  718021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 22:47:19.239760  718021 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:47:19.371080  718021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:47:19.476989  718021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:47:19.783352  718021 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1025 22:47:19.784227  718021 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:19.784252  718021 main.go:141] libmachine: (bridge-258147) Calling .Close
	I1025 22:47:19.784594  718021 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:19.784622  718021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:19.784633  718021 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:19.784642  718021 main.go:141] libmachine: (bridge-258147) Calling .Close
	I1025 22:47:19.785181  718021 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:19.785203  718021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:19.785433  718021 node_ready.go:35] waiting up to 15m0s for node "bridge-258147" to be "Ready" ...
	I1025 22:47:19.808273  718021 node_ready.go:49] node "bridge-258147" has status "Ready":"True"
	I1025 22:47:19.808296  718021 node_ready.go:38] duration metric: took 22.8364ms for node "bridge-258147" to be "Ready" ...
	I1025 22:47:19.808305  718021 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:47:19.817199  718021 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:19.817231  718021 main.go:141] libmachine: (bridge-258147) Calling .Close
	I1025 22:47:19.817552  718021 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:19.817572  718021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:19.817610  718021 main.go:141] libmachine: (bridge-258147) DBG | Closing plugin on server side
	I1025 22:47:19.836122  718021 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-2jxmm" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:20.066875  718021 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:20.066914  718021 main.go:141] libmachine: (bridge-258147) Calling .Close
	I1025 22:47:20.067231  718021 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:20.067252  718021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:20.067261  718021 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:20.067269  718021 main.go:141] libmachine: (bridge-258147) Calling .Close
	I1025 22:47:20.069288  718021 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:20.069313  718021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:20.069289  718021 main.go:141] libmachine: (bridge-258147) DBG | Closing plugin on server side
	I1025 22:47:20.071343  718021 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1025 22:47:15.413601  719897 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 22:47:15.413785  719897 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:15.413851  719897 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:15.435719  719897 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35807
	I1025 22:47:15.436270  719897 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:15.437082  719897 main.go:141] libmachine: Using API Version  1
	I1025 22:47:15.437108  719897 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:15.437519  719897 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:15.437899  719897 main.go:141] libmachine: (embed-certs-601894) Calling .GetMachineName
	I1025 22:47:15.438083  719897 main.go:141] libmachine: (embed-certs-601894) Calling .DriverName
	I1025 22:47:15.438243  719897 start.go:159] libmachine.API.Create for "embed-certs-601894" (driver="kvm2")
	I1025 22:47:15.438279  719897 client.go:168] LocalClient.Create starting
	I1025 22:47:15.438322  719897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem
	I1025 22:47:15.438367  719897 main.go:141] libmachine: Decoding PEM data...
	I1025 22:47:15.438403  719897 main.go:141] libmachine: Parsing certificate...
	I1025 22:47:15.438484  719897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem
	I1025 22:47:15.438511  719897 main.go:141] libmachine: Decoding PEM data...
	I1025 22:47:15.438527  719897 main.go:141] libmachine: Parsing certificate...
	I1025 22:47:15.438559  719897 main.go:141] libmachine: Running pre-create checks...
	I1025 22:47:15.438571  719897 main.go:141] libmachine: (embed-certs-601894) Calling .PreCreateCheck
	I1025 22:47:15.439020  719897 main.go:141] libmachine: (embed-certs-601894) Calling .GetConfigRaw
	I1025 22:47:15.439527  719897 main.go:141] libmachine: Creating machine...
	I1025 22:47:15.439548  719897 main.go:141] libmachine: (embed-certs-601894) Calling .Create
	I1025 22:47:15.439712  719897 main.go:141] libmachine: (embed-certs-601894) creating KVM machine...
	I1025 22:47:15.439792  719897 main.go:141] libmachine: (embed-certs-601894) creating network...
	I1025 22:47:15.441310  719897 main.go:141] libmachine: (embed-certs-601894) DBG | found existing default KVM network
	I1025 22:47:15.442679  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.442492  719919 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:c2:10} reservation:<nil>}
	I1025 22:47:15.443730  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.443617  719919 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:7d:60} reservation:<nil>}
	I1025 22:47:15.445049  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.444913  719919 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:4b:77} reservation:<nil>}
	I1025 22:47:15.446203  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.446112  719919 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038b160}
	I1025 22:47:15.446227  719897 main.go:141] libmachine: (embed-certs-601894) DBG | created network xml: 
	I1025 22:47:15.446238  719897 main.go:141] libmachine: (embed-certs-601894) DBG | <network>
	I1025 22:47:15.446246  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   <name>mk-embed-certs-601894</name>
	I1025 22:47:15.446255  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   <dns enable='no'/>
	I1025 22:47:15.446262  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   
	I1025 22:47:15.446292  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1025 22:47:15.446299  719897 main.go:141] libmachine: (embed-certs-601894) DBG |     <dhcp>
	I1025 22:47:15.446306  719897 main.go:141] libmachine: (embed-certs-601894) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1025 22:47:15.446319  719897 main.go:141] libmachine: (embed-certs-601894) DBG |     </dhcp>
	I1025 22:47:15.446390  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   </ip>
	I1025 22:47:15.446418  719897 main.go:141] libmachine: (embed-certs-601894) DBG |   
	I1025 22:47:15.446429  719897 main.go:141] libmachine: (embed-certs-601894) DBG | </network>
	I1025 22:47:15.446455  719897 main.go:141] libmachine: (embed-certs-601894) DBG | 
	I1025 22:47:15.451932  719897 main.go:141] libmachine: (embed-certs-601894) DBG | trying to create private KVM network mk-embed-certs-601894 192.168.72.0/24...
	I1025 22:47:15.543708  719897 main.go:141] libmachine: (embed-certs-601894) DBG | private KVM network mk-embed-certs-601894 192.168.72.0/24 created
	I1025 22:47:15.543886  719897 main.go:141] libmachine: (embed-certs-601894) setting up store path in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894 ...
	I1025 22:47:15.543965  719897 main.go:141] libmachine: (embed-certs-601894) building disk image from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 22:47:15.544000  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.543939  719919 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:15.544166  719897 main.go:141] libmachine: (embed-certs-601894) Downloading /home/jenkins/minikube-integration/19758-661979/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1025 22:47:15.871960  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.871784  719919 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894/id_rsa...
	I1025 22:47:15.960708  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.960523  719919 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894/embed-certs-601894.rawdisk...
	I1025 22:47:15.960742  719897 main.go:141] libmachine: (embed-certs-601894) DBG | Writing magic tar header
	I1025 22:47:15.960761  719897 main.go:141] libmachine: (embed-certs-601894) DBG | Writing SSH key tar header
	I1025 22:47:15.960774  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:15.960710  719919 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894 ...
	I1025 22:47:15.960912  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894
	I1025 22:47:15.960938  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines
	I1025 22:47:15.960962  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:15.960971  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979
	I1025 22:47:15.960981  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1025 22:47:15.960989  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home/jenkins
	I1025 22:47:15.961005  719897 main.go:141] libmachine: (embed-certs-601894) DBG | checking permissions on dir: /home
	I1025 22:47:15.961013  719897 main.go:141] libmachine: (embed-certs-601894) DBG | skipping /home - not owner
	I1025 22:47:15.961047  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894 (perms=drwx------)
	I1025 22:47:15.961075  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines (perms=drwxr-xr-x)
	I1025 22:47:15.961099  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube (perms=drwxr-xr-x)
	I1025 22:47:15.961112  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins/minikube-integration/19758-661979 (perms=drwxrwxr-x)
	I1025 22:47:15.961124  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 22:47:15.961135  719897 main.go:141] libmachine: (embed-certs-601894) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 22:47:15.961146  719897 main.go:141] libmachine: (embed-certs-601894) creating domain...
	I1025 22:47:15.962665  719897 main.go:141] libmachine: (embed-certs-601894) define libvirt domain using xml: 
	I1025 22:47:15.962683  719897 main.go:141] libmachine: (embed-certs-601894) <domain type='kvm'>
	I1025 22:47:15.962692  719897 main.go:141] libmachine: (embed-certs-601894)   <name>embed-certs-601894</name>
	I1025 22:47:15.962699  719897 main.go:141] libmachine: (embed-certs-601894)   <memory unit='MiB'>2200</memory>
	I1025 22:47:15.962706  719897 main.go:141] libmachine: (embed-certs-601894)   <vcpu>2</vcpu>
	I1025 22:47:15.962713  719897 main.go:141] libmachine: (embed-certs-601894)   <features>
	I1025 22:47:15.962721  719897 main.go:141] libmachine: (embed-certs-601894)     <acpi/>
	I1025 22:47:15.962726  719897 main.go:141] libmachine: (embed-certs-601894)     <apic/>
	I1025 22:47:15.962735  719897 main.go:141] libmachine: (embed-certs-601894)     <pae/>
	I1025 22:47:15.962740  719897 main.go:141] libmachine: (embed-certs-601894)     
	I1025 22:47:15.962748  719897 main.go:141] libmachine: (embed-certs-601894)   </features>
	I1025 22:47:15.962772  719897 main.go:141] libmachine: (embed-certs-601894)   <cpu mode='host-passthrough'>
	I1025 22:47:15.962781  719897 main.go:141] libmachine: (embed-certs-601894)   
	I1025 22:47:15.962786  719897 main.go:141] libmachine: (embed-certs-601894)   </cpu>
	I1025 22:47:15.962792  719897 main.go:141] libmachine: (embed-certs-601894)   <os>
	I1025 22:47:15.962798  719897 main.go:141] libmachine: (embed-certs-601894)     <type>hvm</type>
	I1025 22:47:15.962805  719897 main.go:141] libmachine: (embed-certs-601894)     <boot dev='cdrom'/>
	I1025 22:47:15.962811  719897 main.go:141] libmachine: (embed-certs-601894)     <boot dev='hd'/>
	I1025 22:47:15.962818  719897 main.go:141] libmachine: (embed-certs-601894)     <bootmenu enable='no'/>
	I1025 22:47:15.962825  719897 main.go:141] libmachine: (embed-certs-601894)   </os>
	I1025 22:47:15.962832  719897 main.go:141] libmachine: (embed-certs-601894)   <devices>
	I1025 22:47:15.962838  719897 main.go:141] libmachine: (embed-certs-601894)     <disk type='file' device='cdrom'>
	I1025 22:47:15.962851  719897 main.go:141] libmachine: (embed-certs-601894)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894/boot2docker.iso'/>
	I1025 22:47:15.962858  719897 main.go:141] libmachine: (embed-certs-601894)       <target dev='hdc' bus='scsi'/>
	I1025 22:47:15.962865  719897 main.go:141] libmachine: (embed-certs-601894)       <readonly/>
	I1025 22:47:15.962871  719897 main.go:141] libmachine: (embed-certs-601894)     </disk>
	I1025 22:47:15.962880  719897 main.go:141] libmachine: (embed-certs-601894)     <disk type='file' device='disk'>
	I1025 22:47:15.962887  719897 main.go:141] libmachine: (embed-certs-601894)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1025 22:47:15.962899  719897 main.go:141] libmachine: (embed-certs-601894)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/embed-certs-601894/embed-certs-601894.rawdisk'/>
	I1025 22:47:15.962906  719897 main.go:141] libmachine: (embed-certs-601894)       <target dev='hda' bus='virtio'/>
	I1025 22:47:15.962913  719897 main.go:141] libmachine: (embed-certs-601894)     </disk>
	I1025 22:47:15.962919  719897 main.go:141] libmachine: (embed-certs-601894)     <interface type='network'>
	I1025 22:47:15.962928  719897 main.go:141] libmachine: (embed-certs-601894)       <source network='mk-embed-certs-601894'/>
	I1025 22:47:15.962934  719897 main.go:141] libmachine: (embed-certs-601894)       <model type='virtio'/>
	I1025 22:47:15.962941  719897 main.go:141] libmachine: (embed-certs-601894)     </interface>
	I1025 22:47:15.962948  719897 main.go:141] libmachine: (embed-certs-601894)     <interface type='network'>
	I1025 22:47:15.962956  719897 main.go:141] libmachine: (embed-certs-601894)       <source network='default'/>
	I1025 22:47:15.962962  719897 main.go:141] libmachine: (embed-certs-601894)       <model type='virtio'/>
	I1025 22:47:15.962969  719897 main.go:141] libmachine: (embed-certs-601894)     </interface>
	I1025 22:47:15.962975  719897 main.go:141] libmachine: (embed-certs-601894)     <serial type='pty'>
	I1025 22:47:15.962985  719897 main.go:141] libmachine: (embed-certs-601894)       <target port='0'/>
	I1025 22:47:15.962991  719897 main.go:141] libmachine: (embed-certs-601894)     </serial>
	I1025 22:47:15.962998  719897 main.go:141] libmachine: (embed-certs-601894)     <console type='pty'>
	I1025 22:47:15.963005  719897 main.go:141] libmachine: (embed-certs-601894)       <target type='serial' port='0'/>
	I1025 22:47:15.963022  719897 main.go:141] libmachine: (embed-certs-601894)     </console>
	I1025 22:47:15.963029  719897 main.go:141] libmachine: (embed-certs-601894)     <rng model='virtio'>
	I1025 22:47:15.963037  719897 main.go:141] libmachine: (embed-certs-601894)       <backend model='random'>/dev/random</backend>
	I1025 22:47:15.963043  719897 main.go:141] libmachine: (embed-certs-601894)     </rng>
	I1025 22:47:15.963049  719897 main.go:141] libmachine: (embed-certs-601894)     
	I1025 22:47:15.963054  719897 main.go:141] libmachine: (embed-certs-601894)     
	I1025 22:47:15.963061  719897 main.go:141] libmachine: (embed-certs-601894)   </devices>
	I1025 22:47:15.963066  719897 main.go:141] libmachine: (embed-certs-601894) </domain>
	I1025 22:47:15.963077  719897 main.go:141] libmachine: (embed-certs-601894) 
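The libmachine KVM driver defines the domain from the XML above and then starts it. As a rough illustration only (not minikube's actual code), the same define-and-start step could look like this with the libvirt.org/go/libvirt bindings; the XML file name used here is hypothetical:

// Illustrative sketch: define a persistent KVM domain from XML and boot it,
// roughly the "define libvirt domain using xml" / "creating domain..." steps above.
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // local KVM hypervisor
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persistently define the domain
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start it ("starting domain...")
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	// Hypothetical file holding a <domain> definition like the one logged above.
	xml, err := os.ReadFile("embed-certs-601894.xml")
	if err != nil {
		panic(err)
	}
	if err := defineAndStart(string(xml)); err != nil {
		panic(err)
	}
}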
	I1025 22:47:15.968469  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:1d:3e:e4 in network default
	I1025 22:47:15.969334  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:15.969373  719897 main.go:141] libmachine: (embed-certs-601894) starting domain...
	I1025 22:47:15.969387  719897 main.go:141] libmachine: (embed-certs-601894) ensuring networks are active...
	I1025 22:47:15.970241  719897 main.go:141] libmachine: (embed-certs-601894) Ensuring network default is active
	I1025 22:47:15.970738  719897 main.go:141] libmachine: (embed-certs-601894) Ensuring network mk-embed-certs-601894 is active
	I1025 22:47:15.971371  719897 main.go:141] libmachine: (embed-certs-601894) getting domain XML...
	I1025 22:47:15.972400  719897 main.go:141] libmachine: (embed-certs-601894) creating domain...
	I1025 22:47:17.288104  719897 main.go:141] libmachine: (embed-certs-601894) waiting for IP...
	I1025 22:47:17.289011  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:17.289553  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:17.289627  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:17.289549  719919 retry.go:31] will retry after 307.302844ms: waiting for domain to come up
	I1025 22:47:17.598184  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:17.598744  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:17.598772  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:17.598703  719919 retry.go:31] will retry after 354.231286ms: waiting for domain to come up
	I1025 22:47:17.954205  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:17.954824  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:17.954854  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:17.954755  719919 retry.go:31] will retry after 410.952659ms: waiting for domain to come up
	I1025 22:47:18.367236  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:18.367861  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:18.367882  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:18.367819  719919 retry.go:31] will retry after 511.648166ms: waiting for domain to come up
	I1025 22:47:18.881612  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:18.882237  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:18.882269  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:18.882203  719919 retry.go:31] will retry after 548.025336ms: waiting for domain to come up
	I1025 22:47:19.431800  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:19.432455  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:19.432478  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:19.432429  719919 retry.go:31] will retry after 591.476861ms: waiting for domain to come up
	I1025 22:47:20.025878  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:20.026449  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:20.026490  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:20.026421  719919 retry.go:31] will retry after 1.178338243s: waiting for domain to come up
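The repeated "will retry after ...: waiting for domain to come up" lines come from a retry loop that polls for the domain's DHCP lease with a growing delay. A minimal sketch of that pattern, with a hypothetical getIP stand-in for the lease lookup (illustrative, not the retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP with a growing delay until it returns an address or
// the timeout expires, mirroring the "will retry after ..." pattern above.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little more on each attempt
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	// Hypothetical lease lookup: pretends the DHCP lease appears on the 4th try.
	getIP := func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.100", nil
	}
	ip, err := waitForIP(getIP, 30*time.Second)
	fmt.Println(ip, err)
}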
	I1025 22:47:20.072811  718021 addons.go:510] duration metric: took 1.065262203s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 22:47:20.289210  718021 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-258147" context rescaled to 1 replicas
	I1025 22:47:19.661621  716426 pod_ready.go:103] pod "coredns-7c65d6cfc9-9ch69" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:21.660131  716426 pod_ready.go:93] pod "coredns-7c65d6cfc9-9ch69" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:21.660161  716426 pod_ready.go:82] duration metric: took 17.006917919s for pod "coredns-7c65d6cfc9-9ch69" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.660182  716426 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.665534  716426 pod_ready.go:93] pod "etcd-flannel-258147" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:21.665558  716426 pod_ready.go:82] duration metric: took 5.366794ms for pod "etcd-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.665569  716426 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.670727  716426 pod_ready.go:93] pod "kube-apiserver-flannel-258147" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:21.670749  716426 pod_ready.go:82] duration metric: took 5.172166ms for pod "kube-apiserver-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.670758  716426 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.675313  716426 pod_ready.go:93] pod "kube-controller-manager-flannel-258147" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:21.675340  716426 pod_ready.go:82] duration metric: took 4.573657ms for pod "kube-controller-manager-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.675352  716426 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-qw68p" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.681581  716426 pod_ready.go:93] pod "kube-proxy-qw68p" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:21.681600  716426 pod_ready.go:82] duration metric: took 6.241095ms for pod "kube-proxy-qw68p" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:21.681610  716426 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:22.057581  716426 pod_ready.go:93] pod "kube-scheduler-flannel-258147" in "kube-system" namespace has status "Ready":"True"
	I1025 22:47:22.057604  716426 pod_ready.go:82] duration metric: took 375.98587ms for pod "kube-scheduler-flannel-258147" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:22.057615  716426 pod_ready.go:39] duration metric: took 17.436964098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:47:22.057632  716426 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:47:22.057693  716426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:47:22.078999  716426 api_server.go:72] duration metric: took 28.197281887s to wait for apiserver process to appear ...
	I1025 22:47:22.079031  716426 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:47:22.079054  716426 api_server.go:253] Checking apiserver healthz at https://192.168.50.217:8443/healthz ...
	I1025 22:47:22.084650  716426 api_server.go:279] https://192.168.50.217:8443/healthz returned 200:
	ok
	I1025 22:47:22.085721  716426 api_server.go:141] control plane version: v1.31.1
	I1025 22:47:22.085744  716426 api_server.go:131] duration metric: took 6.706849ms to wait for apiserver health ...
	I1025 22:47:22.085751  716426 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:47:22.260157  716426 system_pods.go:59] 7 kube-system pods found
	I1025 22:47:22.260198  716426 system_pods.go:61] "coredns-7c65d6cfc9-9ch69" [13a0f23b-eff4-465f-9b0f-7828864c2645] Running
	I1025 22:47:22.260206  716426 system_pods.go:61] "etcd-flannel-258147" [0900ef87-0562-44fd-a1b7-4e4667d63d90] Running
	I1025 22:47:22.260210  716426 system_pods.go:61] "kube-apiserver-flannel-258147" [75f6771d-2165-4615-b896-b2cce229e79e] Running
	I1025 22:47:22.260214  716426 system_pods.go:61] "kube-controller-manager-flannel-258147" [7bbe20df-3c9b-46f8-b62c-5255f1fa1d85] Running
	I1025 22:47:22.260217  716426 system_pods.go:61] "kube-proxy-qw68p" [eef1a4f8-305e-4563-a727-635d182f3010] Running
	I1025 22:47:22.260220  716426 system_pods.go:61] "kube-scheduler-flannel-258147" [f7508aef-a8db-4d2e-9f62-4e2a661cf560] Running
	I1025 22:47:22.260223  716426 system_pods.go:61] "storage-provisioner" [a63bffd6-3653-4a77-9cec-0391c97bd05b] Running
	I1025 22:47:22.260229  716426 system_pods.go:74] duration metric: took 174.471934ms to wait for pod list to return data ...
	I1025 22:47:22.260237  716426 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:47:22.457597  716426 default_sa.go:45] found service account: "default"
	I1025 22:47:22.457639  716426 default_sa.go:55] duration metric: took 197.394065ms for default service account to be created ...
	I1025 22:47:22.457651  716426 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 22:47:22.660570  716426 system_pods.go:86] 7 kube-system pods found
	I1025 22:47:22.660608  716426 system_pods.go:89] "coredns-7c65d6cfc9-9ch69" [13a0f23b-eff4-465f-9b0f-7828864c2645] Running
	I1025 22:47:22.660618  716426 system_pods.go:89] "etcd-flannel-258147" [0900ef87-0562-44fd-a1b7-4e4667d63d90] Running
	I1025 22:47:22.660624  716426 system_pods.go:89] "kube-apiserver-flannel-258147" [75f6771d-2165-4615-b896-b2cce229e79e] Running
	I1025 22:47:22.660630  716426 system_pods.go:89] "kube-controller-manager-flannel-258147" [7bbe20df-3c9b-46f8-b62c-5255f1fa1d85] Running
	I1025 22:47:22.660636  716426 system_pods.go:89] "kube-proxy-qw68p" [eef1a4f8-305e-4563-a727-635d182f3010] Running
	I1025 22:47:22.660641  716426 system_pods.go:89] "kube-scheduler-flannel-258147" [f7508aef-a8db-4d2e-9f62-4e2a661cf560] Running
	I1025 22:47:22.660646  716426 system_pods.go:89] "storage-provisioner" [a63bffd6-3653-4a77-9cec-0391c97bd05b] Running
	I1025 22:47:22.660655  716426 system_pods.go:126] duration metric: took 202.996933ms to wait for k8s-apps to be running ...
	I1025 22:47:22.660668  716426 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:47:22.660731  716426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:47:22.676106  716426 system_svc.go:56] duration metric: took 15.427175ms WaitForService to wait for kubelet
	I1025 22:47:22.676138  716426 kubeadm.go:582] duration metric: took 28.794429972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:47:22.676162  716426 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:47:22.858353  716426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:47:22.858389  716426 node_conditions.go:123] node cpu capacity is 2
	I1025 22:47:22.858405  716426 node_conditions.go:105] duration metric: took 182.235533ms to run NodePressure ...
	I1025 22:47:22.858420  716426 start.go:241] waiting for startup goroutines ...
	I1025 22:47:22.858430  716426 start.go:246] waiting for cluster config update ...
	I1025 22:47:22.858447  716426 start.go:255] writing updated cluster config ...
	I1025 22:47:22.858763  716426 ssh_runner.go:195] Run: rm -f paused
	I1025 22:47:22.911742  716426 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:47:22.913427  716426 out.go:177] * Done! kubectl is now configured to use "flannel-258147" cluster and "default" namespace by default
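The pod_ready.go lines above poll each control-plane pod until its Ready condition reports True. A small client-go sketch of that check, assuming a kubeconfig at the default location (illustrative only, not the test harness code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its PodReady condition is True.
func podIsReady(client kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(client, "kube-system", "etcd-flannel-258147")
	fmt.Println(ready, err)
}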
	I1025 22:47:21.206175  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:21.206765  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:21.206792  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:21.206759  719919 retry.go:31] will retry after 1.090011788s: waiting for domain to come up
	I1025 22:47:22.298424  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:22.298964  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:22.299010  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:22.298948  719919 retry.go:31] will retry after 1.75160372s: waiting for domain to come up
	I1025 22:47:24.051660  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:24.052126  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:24.052160  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:24.052058  719919 retry.go:31] will retry after 1.624023581s: waiting for domain to come up
	I1025 22:47:21.839470  718021 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2jxmm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2jxmm" not found
	I1025 22:47:21.839500  718021 pod_ready.go:82] duration metric: took 2.003343548s for pod "coredns-7c65d6cfc9-2jxmm" in "kube-system" namespace to be "Ready" ...
	E1025 22:47:21.839514  718021 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2jxmm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2jxmm" not found
	I1025 22:47:21.839523  718021 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-2llzq" in "kube-system" namespace to be "Ready" ...
	I1025 22:47:23.851790  718021 pod_ready.go:103] pod "coredns-7c65d6cfc9-2llzq" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:24.805437  718115 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164 eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5 6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66 055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291 d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9 824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c f756a6af6c3a38f836b4dbae08e65343de8b166745b15c692df252e9f63c2c1b 52e048a392296ac9c98d1b948b7a4625acf1a6d6963d7adcfc63ebeae0864708 68f7342f168e377b0f925e1e3fb3bdb7ad56128136f5420ee9b26bad5b4750d3 adb6019d5aedf238516350b052c68f892ba50732416d0493c740f6d5da445b66 2d3bc495b7825941d3c7e8141f5b3caf68b3026b31d1697a7686f01c8473b510 c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917: (16.842861871s)
	W1025 22:47:24.805546  718115 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164 eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5 6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66 055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291 d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9 824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c f756a6af6c3a38f836b4dbae08e65343de8b166745b15c692df252e9f63c2c1b 52e048a392296ac9c98d1b948b7a4625acf1a6d6963d7adcfc63ebeae0864708 68f7342f168e377b0f925e1e3fb3bdb7ad56128136f5420ee9b26bad5b4750d3 adb6019d5aedf238516350b052c68f892ba50732416d0493c740f6d5da445b66 2d3bc495b7825941d3c7e8141f5b3caf68b3026b31d1697a7686f01c8473b510 c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917: Process exited with status 1
	stdout:
	b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164
	eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c
	b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d
	e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5
	6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66
	055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291
	d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9
	
	stderr:
	E1025 22:47:24.798245    3396 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c\": container with ID starting with 824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c not found: ID does not exist" containerID="824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c"
	time="2024-10-25T22:47:24Z" level=fatal msg="stopping the container \"824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c\": rpc error: code = NotFound desc = could not find container \"824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c\": container with ID starting with 824c94f7450fc640e01f87f127b44da67da738649a10652ca9243c5993becd3c not found: ID does not exist"
	I1025 22:47:24.805649  718115 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 22:47:24.855972  718115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:47:24.868700  718115 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct 25 22:46 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Oct 25 22:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Oct 25 22:46 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Oct 25 22:46 /etc/kubernetes/scheduler.conf
	
	I1025 22:47:24.868777  718115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:47:24.879480  718115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:47:24.889551  718115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:47:24.898778  718115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:47:24.898828  718115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:47:24.908335  718115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:47:24.917887  718115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:47:24.917940  718115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
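The grep/rm sequence above checks whether each existing kubeconfig still references the expected control-plane endpoint and removes the ones that do not, so that the following "kubeadm init phase kubeconfig" regenerates them. A local-exec sketch of that logic (minikube runs these commands over SSH; this is only an approximation of the kubeadm.go:163 check):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureEndpoint removes path if it does not mention endpoint; grep exits
// non-zero when the pattern is absent.
func ensureEndpoint(endpoint, path string) error {
	if err := exec.Command("grep", "-q", endpoint, path).Run(); err != nil {
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
		return os.Remove(path)
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(endpoint, f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}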
	I1025 22:47:24.927296  718115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:47:24.936882  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:24.993353  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:26.136743  718115 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.143332756s)
	I1025 22:47:26.136791  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:26.419291  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:26.511101  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:26.602643  718115 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:47:26.602770  718115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:47:27.102900  718115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:47:27.602851  718115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:47:27.619066  718115 api_server.go:72] duration metric: took 1.016419175s to wait for apiserver process to appear ...
	I1025 22:47:27.619101  718115 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:47:27.619128  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:25.677499  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:25.678031  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:25.678084  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:25.678008  719919 retry.go:31] will retry after 2.356134856s: waiting for domain to come up
	I1025 22:47:28.035482  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:28.036044  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:28.036074  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:28.036003  719919 retry.go:31] will retry after 2.190402204s: waiting for domain to come up
	I1025 22:47:30.228399  719897 main.go:141] libmachine: (embed-certs-601894) DBG | domain embed-certs-601894 has defined MAC address 52:54:00:35:8f:b1 in network mk-embed-certs-601894
	I1025 22:47:30.228792  719897 main.go:141] libmachine: (embed-certs-601894) DBG | unable to find current IP address of domain embed-certs-601894 in network mk-embed-certs-601894
	I1025 22:47:30.228837  719897 main.go:141] libmachine: (embed-certs-601894) DBG | I1025 22:47:30.228790  719919 retry.go:31] will retry after 3.336943933s: waiting for domain to come up
	I1025 22:47:26.346934  718021 pod_ready.go:103] pod "coredns-7c65d6cfc9-2llzq" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:28.347316  718021 pod_ready.go:103] pod "coredns-7c65d6cfc9-2llzq" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:30.847989  718021 pod_ready.go:103] pod "coredns-7c65d6cfc9-2llzq" in "kube-system" namespace has status "Ready":"False"
	I1025 22:47:29.813120  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:47:29.813152  718115 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:47:29.813167  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:29.871643  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:47:29.871687  718115 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:47:30.119985  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:30.124282  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:47:30.124317  718115 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:47:30.619964  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:30.627507  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:47:30.627539  718115 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:47:31.119675  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:31.133529  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:47:31.133567  718115 api_server.go:103] status: https://192.168.39.249:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:47:31.620178  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:31.627206  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I1025 22:47:31.635386  718115 api_server.go:141] control plane version: v1.31.1
	I1025 22:47:31.635422  718115 api_server.go:131] duration metric: took 4.0163123s to wait for apiserver health ...
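The healthz wait above retries roughly every 500ms, logging the failed check bodies, until the apiserver returns 200. A minimal sketch of such a polling loop; TLS verification is skipped here purely to keep the example short (the real client authenticates with the cluster CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz GETs the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, printing the body of each failed check.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.249:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}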
	I1025 22:47:31.635434  718115 cni.go:84] Creating CNI manager for ""
	I1025 22:47:31.635443  718115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:47:31.636863  718115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:47:31.638519  718115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:47:31.654330  718115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:47:31.677469  718115 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:47:31.677572  718115 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 22:47:31.677594  718115 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 22:47:31.691024  718115 system_pods.go:59] 8 kube-system pods found
	I1025 22:47:31.691075  718115 system_pods.go:61] "coredns-7c65d6cfc9-pmldp" [19a99004-9c57-4505-8d1b-1479b285d86e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:47:31.691090  718115 system_pods.go:61] "coredns-7c65d6cfc9-q2jzq" [2b931e93-e7ed-4f57-a6f2-846d241e2441] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:47:31.691101  718115 system_pods.go:61] "etcd-kubernetes-upgrade-234842" [e347d030-3552-4f05-98ca-4c6684889602] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:47:31.691117  718115 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-234842" [5c6b6a73-f1a5-4fab-918e-7f26caf39054] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:47:31.691127  718115 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-234842" [7432bce2-379e-46c7-ba94-5310d35982ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:47:31.691136  718115 system_pods.go:61] "kube-proxy-s4r8h" [f7fb759f-d8e1-4879-9f57-7bd22856c380] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 22:47:31.691144  718115 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-234842" [9236f5f1-cb9d-4972-b392-6b18f1b41d2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:47:31.691151  718115 system_pods.go:61] "storage-provisioner" [57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 22:47:31.691162  718115 system_pods.go:74] duration metric: took 13.664506ms to wait for pod list to return data ...
	I1025 22:47:31.691176  718115 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:47:31.696585  718115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:47:31.696617  718115 node_conditions.go:123] node cpu capacity is 2
	I1025 22:47:31.696631  718115 node_conditions.go:105] duration metric: took 5.448405ms to run NodePressure ...
	I1025 22:47:31.696656  718115 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:47:32.051889  718115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:47:32.069497  718115 ops.go:34] apiserver oom_adj: -16
	I1025 22:47:32.069526  718115 kubeadm.go:597] duration metric: took 24.226269316s to restartPrimaryControlPlane
	I1025 22:47:32.069539  718115 kubeadm.go:394] duration metric: took 24.523607767s to StartCluster
	I1025 22:47:32.069564  718115 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:32.069661  718115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:47:32.071147  718115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:32.071665  718115 config.go:182] Loaded profile config "kubernetes-upgrade-234842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:32.071732  718115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:47:32.071797  718115 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:47:32.071902  718115 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-234842"
	I1025 22:47:32.071924  718115 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-234842"
	W1025 22:47:32.071932  718115 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:47:32.071964  718115 host.go:66] Checking if "kubernetes-upgrade-234842" exists ...
	I1025 22:47:32.072382  718115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:32.072417  718115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:32.072524  718115 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-234842"
	I1025 22:47:32.072556  718115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-234842"
	I1025 22:47:32.073082  718115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:32.073140  718115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:32.074896  718115 out.go:177] * Verifying Kubernetes components...
	I1025 22:47:32.076479  718115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:47:32.101093  718115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I1025 22:47:32.101099  718115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I1025 22:47:32.101951  718115 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:32.102087  718115 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:32.102631  718115 main.go:141] libmachine: Using API Version  1
	I1025 22:47:32.102646  718115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:32.102689  718115 main.go:141] libmachine: Using API Version  1
	I1025 22:47:32.102696  718115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:32.102997  718115 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:32.103018  718115 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:32.103132  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetState
	I1025 22:47:32.103479  718115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:32.103505  718115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:32.106541  718115 kapi.go:59] client config for kubernetes-upgrade-234842: &rest.Config{Host:"https://192.168.39.249:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.crt", KeyFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kubernetes-upgrade-234842/client.key", CAFile:"/home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2439d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 22:47:32.106862  718115 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-234842"
	W1025 22:47:32.106877  718115 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:47:32.106920  718115 host.go:66] Checking if "kubernetes-upgrade-234842" exists ...
	I1025 22:47:32.107233  718115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:32.107268  718115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:32.124479  718115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38011
	I1025 22:47:32.125304  718115 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:32.125979  718115 main.go:141] libmachine: Using API Version  1
	I1025 22:47:32.125999  718115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:32.126440  718115 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:32.126686  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetState
	I1025 22:47:32.127365  718115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34869
	I1025 22:47:32.127797  718115 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:32.128260  718115 main.go:141] libmachine: Using API Version  1
	I1025 22:47:32.128277  718115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:32.128570  718115 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:32.129096  718115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:32.129140  718115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:32.130208  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:47:32.133705  718115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:47:32.135076  718115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:47:32.135102  718115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:47:32.135124  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:47:32.138865  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:47:32.139290  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:45:58 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:47:32.139316  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:47:32.140571  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:47:32.140825  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:47:32.141024  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:47:32.141204  718115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:47:32.148437  718115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I1025 22:47:32.149454  718115 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:32.150132  718115 main.go:141] libmachine: Using API Version  1
	I1025 22:47:32.150155  718115 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:32.150557  718115 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:32.150791  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetState
	I1025 22:47:32.152622  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .DriverName
	I1025 22:47:32.152843  718115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:47:32.152861  718115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:47:32.152881  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHHostname
	I1025 22:47:32.156016  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:47:32.156395  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:67:a6", ip: ""} in network mk-kubernetes-upgrade-234842: {Iface:virbr3 ExpiryTime:2024-10-25 23:45:58 +0000 UTC Type:0 Mac:52:54:00:6a:67:a6 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:kubernetes-upgrade-234842 Clientid:01:52:54:00:6a:67:a6}
	I1025 22:47:32.156411  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | domain kubernetes-upgrade-234842 has defined IP address 192.168.39.249 and MAC address 52:54:00:6a:67:a6 in network mk-kubernetes-upgrade-234842
	I1025 22:47:32.156661  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHPort
	I1025 22:47:32.157129  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHKeyPath
	I1025 22:47:32.157241  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .GetSSHUsername
	I1025 22:47:32.157368  718115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/kubernetes-upgrade-234842/id_rsa Username:docker}
	I1025 22:47:32.295169  718115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:47:32.312941  718115 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:47:32.313054  718115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:47:32.327418  718115 api_server.go:72] duration metric: took 255.654941ms to wait for apiserver process to appear ...
	I1025 22:47:32.327446  718115 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:47:32.327468  718115 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1025 22:47:32.332570  718115 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I1025 22:47:32.333595  718115 api_server.go:141] control plane version: v1.31.1
	I1025 22:47:32.333620  718115 api_server.go:131] duration metric: took 6.167239ms to wait for apiserver health ...
	I1025 22:47:32.333628  718115 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:47:32.339221  718115 system_pods.go:59] 8 kube-system pods found
	I1025 22:47:32.339252  718115 system_pods.go:61] "coredns-7c65d6cfc9-pmldp" [19a99004-9c57-4505-8d1b-1479b285d86e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:47:32.339259  718115 system_pods.go:61] "coredns-7c65d6cfc9-q2jzq" [2b931e93-e7ed-4f57-a6f2-846d241e2441] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:47:32.339272  718115 system_pods.go:61] "etcd-kubernetes-upgrade-234842" [e347d030-3552-4f05-98ca-4c6684889602] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:47:32.339279  718115 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-234842" [5c6b6a73-f1a5-4fab-918e-7f26caf39054] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:47:32.339287  718115 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-234842" [7432bce2-379e-46c7-ba94-5310d35982ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:47:32.339294  718115 system_pods.go:61] "kube-proxy-s4r8h" [f7fb759f-d8e1-4879-9f57-7bd22856c380] Running
	I1025 22:47:32.339300  718115 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-234842" [9236f5f1-cb9d-4972-b392-6b18f1b41d2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:47:32.339306  718115 system_pods.go:61] "storage-provisioner" [57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd] Running
	I1025 22:47:32.339312  718115 system_pods.go:74] duration metric: took 5.67919ms to wait for pod list to return data ...
	I1025 22:47:32.339321  718115 kubeadm.go:582] duration metric: took 267.564695ms to wait for: map[apiserver:true system_pods:true]
	I1025 22:47:32.339335  718115 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:47:32.342447  718115 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:47:32.342471  718115 node_conditions.go:123] node cpu capacity is 2
	I1025 22:47:32.342481  718115 node_conditions.go:105] duration metric: took 3.142432ms to run NodePressure ...
	I1025 22:47:32.342493  718115 start.go:241] waiting for startup goroutines ...
	I1025 22:47:32.451361  718115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:47:32.469839  718115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:47:33.220416  718115 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:33.220439  718115 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:33.220462  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Close
	I1025 22:47:33.220449  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Close
	I1025 22:47:33.220789  718115 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:33.220807  718115 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:33.220816  718115 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:33.220823  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Close
	I1025 22:47:33.220828  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Closing plugin on server side
	I1025 22:47:33.220793  718115 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:33.220850  718115 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:33.220858  718115 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:33.220865  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Close
	I1025 22:47:33.221094  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) DBG | Closing plugin on server side
	I1025 22:47:33.221144  718115 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:33.221189  718115 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:33.221217  718115 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:33.221227  718115 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:33.230799  718115 main.go:141] libmachine: Making call to close driver server
	I1025 22:47:33.230826  718115 main.go:141] libmachine: (kubernetes-upgrade-234842) Calling .Close
	I1025 22:47:33.231091  718115 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:47:33.231135  718115 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:47:33.232504  718115 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1025 22:47:33.233755  718115 addons.go:510] duration metric: took 1.161974166s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 22:47:33.233787  718115 start.go:246] waiting for cluster config update ...
	I1025 22:47:33.233799  718115 start.go:255] writing updated cluster config ...
	I1025 22:47:33.234029  718115 ssh_runner.go:195] Run: rm -f paused
	I1025 22:47:33.297739  718115 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:47:33.299456  718115 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-234842" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.050586974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729896454050562992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb379624-868c-453a-971e-ff2af7994d45 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.051285074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f18fda30-c473-4d35-963f-ad1aed949f84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.051399425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f18fda30-c473-4d35-963f-ad1aed949f84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.051950069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55cd39b780d874d4728b4d60d7a1492d4909a76799c1dd226bcb06fe0587efb2,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450897833217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c0588bda9d332a5932dd7f667218f4181c92091f4edbd2d6d26f1462774776,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729896450883312336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cb2e1648824cb3ebe8b0bfde150cd73534227f0300e0c9074b946dc91b4415,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729896450907937780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b7a92cb48427514661ac28baa33696a4dc2fe09271e0de45da77b839a538a4,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450873548942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-84
6d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec26aa28081a0848b4631cd82c1ca4b998b4f0e7fa95ee547a94dc089c5fe79a,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729896447095684719,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f358619968d2da00408069edf2611f84f868575f7165fa8fece8aef24a8c6886,PodSandboxId:56a9d631cf73ba6edd10276f211d53d74c5a99a07c11970ce4174b007dc1d39b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729896447072872
867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05de3e64c8739ef3c98a8c77d8ce77e13f27d723bcaa68e8d4a512e205d9382,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
9896447082468029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bd640ec61503c0d67b4e4f9985d4453a86f95053503af651edb498817996c9,PodSandboxId:0b3824fa33c8d266d534baf70666b6e794e9305bf0e80b465c0c411c856d373c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172989643194519252
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427686759600,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427509019337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-846d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68
a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729896427185256266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58
e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1729896427198459696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1729896426504417511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:k
ube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1729896426359750815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9,PodSandboxId:3d2f783e9151f93de2823641a39017c8492c81bb0109e18b9d42b47fd17fa34e,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1729896424928720580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917,PodSandboxId:b9da248193264862fdad1e142b5e639217718b3105d7857d5e09de52e76715a5,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1729896375153825126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f18fda30-c473-4d35-963f-ad1aed949f84 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.108456753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32408772-d493-4d3c-a804-8f2a13cf084c name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.108552073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32408772-d493-4d3c-a804-8f2a13cf084c name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.110264288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b816b028-f541-4214-8b80-9a764df8442d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.110652658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729896454110628172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b816b028-f541-4214-8b80-9a764df8442d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.111295333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58e5186b-59d6-4626-82dd-48dc33268848 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.111373056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58e5186b-59d6-4626-82dd-48dc33268848 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.111793759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55cd39b780d874d4728b4d60d7a1492d4909a76799c1dd226bcb06fe0587efb2,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450897833217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c0588bda9d332a5932dd7f667218f4181c92091f4edbd2d6d26f1462774776,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729896450883312336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cb2e1648824cb3ebe8b0bfde150cd73534227f0300e0c9074b946dc91b4415,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729896450907937780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b7a92cb48427514661ac28baa33696a4dc2fe09271e0de45da77b839a538a4,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450873548942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-84
6d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec26aa28081a0848b4631cd82c1ca4b998b4f0e7fa95ee547a94dc089c5fe79a,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729896447095684719,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f358619968d2da00408069edf2611f84f868575f7165fa8fece8aef24a8c6886,PodSandboxId:56a9d631cf73ba6edd10276f211d53d74c5a99a07c11970ce4174b007dc1d39b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729896447072872
867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05de3e64c8739ef3c98a8c77d8ce77e13f27d723bcaa68e8d4a512e205d9382,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
9896447082468029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bd640ec61503c0d67b4e4f9985d4453a86f95053503af651edb498817996c9,PodSandboxId:0b3824fa33c8d266d534baf70666b6e794e9305bf0e80b465c0c411c856d373c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172989643194519252
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427686759600,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427509019337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-846d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68
a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729896427185256266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58
e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1729896427198459696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1729896426504417511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:k
ube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1729896426359750815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9,PodSandboxId:3d2f783e9151f93de2823641a39017c8492c81bb0109e18b9d42b47fd17fa34e,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1729896424928720580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917,PodSandboxId:b9da248193264862fdad1e142b5e639217718b3105d7857d5e09de52e76715a5,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1729896375153825126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58e5186b-59d6-4626-82dd-48dc33268848 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.157627332Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43b07b8c-0ca4-4563-a4ab-6f73c766f659 name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.157697716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43b07b8c-0ca4-4563-a4ab-6f73c766f659 name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.158748574Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ba0619a-b659-41f3-ba25-dc9e494ce1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.159200584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729896454159173265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ba0619a-b659-41f3-ba25-dc9e494ce1e1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.159824057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56320152-425f-4ba3-b7e1-12b901153195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.159875314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56320152-425f-4ba3-b7e1-12b901153195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.160325573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55cd39b780d874d4728b4d60d7a1492d4909a76799c1dd226bcb06fe0587efb2,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450897833217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c0588bda9d332a5932dd7f667218f4181c92091f4edbd2d6d26f1462774776,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729896450883312336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cb2e1648824cb3ebe8b0bfde150cd73534227f0300e0c9074b946dc91b4415,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729896450907937780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b7a92cb48427514661ac28baa33696a4dc2fe09271e0de45da77b839a538a4,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450873548942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-84
6d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec26aa28081a0848b4631cd82c1ca4b998b4f0e7fa95ee547a94dc089c5fe79a,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729896447095684719,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f358619968d2da00408069edf2611f84f868575f7165fa8fece8aef24a8c6886,PodSandboxId:56a9d631cf73ba6edd10276f211d53d74c5a99a07c11970ce4174b007dc1d39b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729896447072872
867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05de3e64c8739ef3c98a8c77d8ce77e13f27d723bcaa68e8d4a512e205d9382,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
9896447082468029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bd640ec61503c0d67b4e4f9985d4453a86f95053503af651edb498817996c9,PodSandboxId:0b3824fa33c8d266d534baf70666b6e794e9305bf0e80b465c0c411c856d373c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172989643194519252
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427686759600,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427509019337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-846d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68
a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729896427185256266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58
e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1729896427198459696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1729896426504417511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:k
ube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1729896426359750815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9,PodSandboxId:3d2f783e9151f93de2823641a39017c8492c81bb0109e18b9d42b47fd17fa34e,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1729896424928720580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917,PodSandboxId:b9da248193264862fdad1e142b5e639217718b3105d7857d5e09de52e76715a5,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1729896375153825126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56320152-425f-4ba3-b7e1-12b901153195 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.192969328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad41212c-6107-4826-bd80-7cec4030a226 name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.193039022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad41212c-6107-4826-bd80-7cec4030a226 name=/runtime.v1.RuntimeService/Version
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.194595053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dce7099-9206-4334-92f2-10bea9f5ed0e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.194949230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729896454194925156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dce7099-9206-4334-92f2-10bea9f5ed0e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.195541514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44bf4ba5-c358-453f-83ae-90842a21a9a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.195592948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44bf4ba5-c358-453f-83ae-90842a21a9a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 22:47:34 kubernetes-upgrade-234842 crio[2627]: time="2024-10-25 22:47:34.195914779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55cd39b780d874d4728b4d60d7a1492d4909a76799c1dd226bcb06fe0587efb2,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450897833217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9c0588bda9d332a5932dd7f667218f4181c92091f4edbd2d6d26f1462774776,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1729896450883312336,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4cb2e1648824cb3ebe8b0bfde150cd73534227f0300e0c9074b946dc91b4415,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1729896450907937780,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62b7a92cb48427514661ac28baa33696a4dc2fe09271e0de45da77b839a538a4,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1729896450873548942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-84
6d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec26aa28081a0848b4631cd82c1ca4b998b4f0e7fa95ee547a94dc089c5fe79a,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1729896447095684719,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f358619968d2da00408069edf2611f84f868575f7165fa8fece8aef24a8c6886,PodSandboxId:56a9d631cf73ba6edd10276f211d53d74c5a99a07c11970ce4174b007dc1d39b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1729896447072872
867,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05de3e64c8739ef3c98a8c77d8ce77e13f27d723bcaa68e8d4a512e205d9382,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
9896447082468029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9bd640ec61503c0d67b4e4f9985d4453a86f95053503af651edb498817996c9,PodSandboxId:0b3824fa33c8d266d534baf70666b6e794e9305bf0e80b465c0c411c856d373c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172989643194519252
3,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164,PodSandboxId:ef80295e724807765f06d5e19bde01ee4fb53dc7703f3299dcfa4f4830c87f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427686759600,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pmldp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a99004-9c57-4505-8d1b-1479b285d86e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c,PodSandboxId:c19c08f1ad16075ca811ef755d96917f0517b1b16537bb1f81dbdd3b095491c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1729896427509019337,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-q2jzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b931e93-e7ed-4f57-a6f2-846d241e2441,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5,PodSandboxId:2ea5111b8ceb39136c1a8927c2f1c3c71b8cadbdf0da68
a3312ceb770cc93c1c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1729896427185256266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d,PodSandboxId:90661059cbe46d91e4b0f483f9fb5d9407326395aa7404f1a2da59cae58
e743e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1729896427198459696,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s4r8h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7fb759f-d8e1-4879-9f57-7bd22856c380,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66,PodSandboxId:0b9d8435b3851f2e57c3c0c7c4e1184cfa399888c19f2b9a9bf5457991bf9782,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1729896426504417511,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7986e9b4abf3a95e80babc0d0828d51,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291,PodSandboxId:153adeff87dc7b1bf2e5e25652458f3734ad410ebf69e46c9a65eaa1b99bf641,Metadata:&ContainerMetadata{Name:k
ube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1729896426359750815,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 141207779613f0bea9b8dca7f9f7a214,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9,PodSandboxId:3d2f783e9151f93de2823641a39017c8492c81bb0109e18b9d42b47fd17fa34e,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1729896424928720580,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f7206da0a7c04043eac83a915eab48c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917,PodSandboxId:b9da248193264862fdad1e142b5e639217718b3105d7857d5e09de52e76715a5,Metadata:&Container
Metadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1729896375153825126,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-234842,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc85e3d522a6cbc01a3759274f3439e8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44bf4ba5-c358-453f-83ae-90842a21a9a5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c4cb2e1648824       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   2ea5111b8ceb3       storage-provisioner
	55cd39b780d87       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   2                   ef80295e72480       coredns-7c65d6cfc9-pmldp
	f9c0588bda9d3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago        Running             kube-proxy                2                   90661059cbe46       kube-proxy-s4r8h
	62b7a92cb4842       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   2                   c19c08f1ad160       coredns-7c65d6cfc9-q2jzq
	ec26aa28081a0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago        Running             kube-scheduler            2                   0b9d8435b3851       kube-scheduler-kubernetes-upgrade-234842
	c05de3e64c873       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago        Running             kube-apiserver            2                   153adeff87dc7       kube-apiserver-kubernetes-upgrade-234842
	f358619968d2d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago        Running             kube-controller-manager   2                   56a9d631cf73b       kube-controller-manager-kubernetes-upgrade-234842
	d9bd640ec6150       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   22 seconds ago       Running             etcd                      1                   0b3824fa33c8d       etcd-kubernetes-upgrade-234842
	b9b1f91e4d882       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago       Exited              coredns                   1                   ef80295e72480       coredns-7c65d6cfc9-pmldp
	eb231f6bc57f2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago       Exited              coredns                   1                   c19c08f1ad160       coredns-7c65d6cfc9-q2jzq
	b6aeb008ac089       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   27 seconds ago       Exited              kube-proxy                1                   90661059cbe46       kube-proxy-s4r8h
	e903173ead59b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   27 seconds ago       Exited              storage-provisioner       1                   2ea5111b8ceb3       storage-provisioner
	6ee655c2b9566       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   27 seconds ago       Exited              kube-scheduler            1                   0b9d8435b3851       kube-scheduler-kubernetes-upgrade-234842
	055a7d1fd00e5       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   27 seconds ago       Exited              kube-apiserver            1                   153adeff87dc7       kube-apiserver-kubernetes-upgrade-234842
	d3c1bb1bc0d6e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   29 seconds ago       Exited              kube-controller-manager   1                   3d2f783e9151f       kube-controller-manager-kubernetes-upgrade-234842
	c609a55d6d5f0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   b9da248193264       etcd-kubernetes-upgrade-234842
	
	
	==> coredns [55cd39b780d874d4728b4d60d7a1492d4909a76799c1dd226bcb06fe0587efb2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [62b7a92cb48427514661ac28baa33696a4dc2fe09271e0de45da77b839a538a4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-234842
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-234842
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Oct 2024 22:46:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-234842
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Oct 2024 22:47:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Oct 2024 22:47:29 +0000   Fri, 25 Oct 2024 22:46:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Oct 2024 22:47:29 +0000   Fri, 25 Oct 2024 22:46:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Oct 2024 22:47:29 +0000   Fri, 25 Oct 2024 22:46:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 25 Oct 2024 22:47:29 +0000   Fri, 25 Oct 2024 22:46:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    kubernetes-upgrade-234842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c74d6a1eb8546019d82f3eb1cc39b9a
	  System UUID:                4c74d6a1-eb85-4601-9d82-f3eb1cc39b9a
	  Boot ID:                    20ceee44-08a9-4337-8d11-11e890bc9dbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-pmldp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 coredns-7c65d6cfc9-q2jzq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 etcd-kubernetes-upgrade-234842                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	  kube-system                 kube-apiserver-kubernetes-upgrade-234842             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-234842    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-s4r8h                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-kubernetes-upgrade-234842             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-234842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x8 over 80s)  kubelet          Node kubernetes-upgrade-234842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x7 over 80s)  kubelet          Node kubernetes-upgrade-234842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                node-controller  Node kubernetes-upgrade-234842 event: Registered Node kubernetes-upgrade-234842 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-234842 event: Registered Node kubernetes-upgrade-234842 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 22:46] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.062382] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061140] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.180143] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.171987] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.295539] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +5.275183] systemd-fstab-generator[721]: Ignoring "noauto" option for root device
	[  +0.069280] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.404578] systemd-fstab-generator[844]: Ignoring "noauto" option for root device
	[  +7.650967] systemd-fstab-generator[1240]: Ignoring "noauto" option for root device
	[  +0.083805] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.004711] kauditd_printk_skb: 51 callbacks suppressed
	[Oct25 22:47] systemd-fstab-generator[2187]: Ignoring "noauto" option for root device
	[  +0.094153] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.101525] systemd-fstab-generator[2215]: Ignoring "noauto" option for root device
	[  +0.188328] systemd-fstab-generator[2283]: Ignoring "noauto" option for root device
	[  +0.168897] systemd-fstab-generator[2295]: Ignoring "noauto" option for root device
	[  +0.488900] systemd-fstab-generator[2414]: Ignoring "noauto" option for root device
	[  +1.174471] systemd-fstab-generator[2736]: Ignoring "noauto" option for root device
	[  +6.018941] kauditd_printk_skb: 251 callbacks suppressed
	[  +6.399552] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.007849] systemd-fstab-generator[3771]: Ignoring "noauto" option for root device
	[  +4.746359] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.123557] systemd-fstab-generator[4263]: Ignoring "noauto" option for root device
	
	
	==> etcd [c609a55d6d5f0eacf3263f45f5dfb6c629bf7b0e8bcc788be1abb00ca7f89917] <==
	{"level":"info","ts":"2024-10-25T22:46:16.353694Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:46:16.353725Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:46:16.357889Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-25T22:46:16.359243Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-25T22:46:16.361919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-25T22:46:16.360932Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-25T22:46:16.369863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.249:2379"}
	{"level":"info","ts":"2024-10-25T22:46:16.376758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-25T22:46:39.058580Z","caller":"traceutil/trace.go:171","msg":"trace[338960139] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"174.420652ms","start":"2024-10-25T22:46:38.884131Z","end":"2024-10-25T22:46:39.058552Z","steps":["trace[338960139] 'process raft request'  (duration: 174.289391ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T22:46:39.366337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.89721ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T22:46:39.366479Z","caller":"traceutil/trace.go:171","msg":"trace[92895601] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:388; }","duration":"132.082805ms","start":"2024-10-25T22:46:39.234383Z","end":"2024-10-25T22:46:39.366465Z","steps":["trace[92895601] 'range keys from in-memory index tree'  (duration: 131.882991ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T22:46:39.366703Z","caller":"traceutil/trace.go:171","msg":"trace[2056383792] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:399; }","duration":"146.17009ms","start":"2024-10-25T22:46:39.220524Z","end":"2024-10-25T22:46:39.366694Z","steps":["trace[2056383792] 'read index received'  (duration: 145.802117ms)","trace[2056383792] 'applied index is now lower than readState.Index'  (duration: 367.434µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-25T22:46:39.366949Z","caller":"traceutil/trace.go:171","msg":"trace[76892151] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"209.668231ms","start":"2024-10-25T22:46:39.157270Z","end":"2024-10-25T22:46:39.366938Z","steps":["trace[76892151] 'process raft request'  (duration: 209.126445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-25T22:46:39.367183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.668161ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-25T22:46:39.367223Z","caller":"traceutil/trace.go:171","msg":"trace[586544779] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:389; }","duration":"146.722975ms","start":"2024-10-25T22:46:39.220494Z","end":"2024-10-25T22:46:39.367217Z","steps":["trace[586544779] 'agreement among raft nodes before linearized reading'  (duration: 146.654108ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-25T22:46:56.787492Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-25T22:46:56.787639Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-234842","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	{"level":"warn","ts":"2024-10-25T22:46:56.787736Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-25T22:46:56.787838Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-25T22:46:56.846884Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-25T22:46:56.846946Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.249:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-25T22:46:56.847022Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"318ee90c3446d547","current-leader-member-id":"318ee90c3446d547"}
	{"level":"info","ts":"2024-10-25T22:46:56.850643Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-10-25T22:46:56.850780Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-10-25T22:46:56.850827Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-234842","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"]}
	
	
	==> etcd [d9bd640ec61503c0d67b4e4f9985d4453a86f95053503af651edb498817996c9] <==
	{"level":"info","ts":"2024-10-25T22:47:12.112651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 switched to configuration voters=(3571047793177318727)"}
	{"level":"info","ts":"2024-10-25T22:47:12.112751Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","added-peer-id":"318ee90c3446d547","added-peer-peer-urls":["https://192.168.39.249:2380"]}
	{"level":"info","ts":"2024-10-25T22:47:12.112977Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ba21282e7acd13d6","local-member-id":"318ee90c3446d547","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:47:12.113019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T22:47:12.119826Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-25T22:47:12.120118Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-10-25T22:47:12.120154Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.249:2380"}
	{"level":"info","ts":"2024-10-25T22:47:12.120768Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"318ee90c3446d547","initial-advertise-peer-urls":["https://192.168.39.249:2380"],"listen-peer-urls":["https://192.168.39.249:2380"],"advertise-client-urls":["https://192.168.39.249:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.249:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-25T22:47:12.120846Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-25T22:47:13.199378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-25T22:47:13.199505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-25T22:47:13.199570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgPreVoteResp from 318ee90c3446d547 at term 2"}
	{"level":"info","ts":"2024-10-25T22:47:13.199612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became candidate at term 3"}
	{"level":"info","ts":"2024-10-25T22:47:13.199645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 received MsgVoteResp from 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2024-10-25T22:47:13.199682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"318ee90c3446d547 became leader at term 3"}
	{"level":"info","ts":"2024-10-25T22:47:13.199717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 318ee90c3446d547 elected leader 318ee90c3446d547 at term 3"}
	{"level":"info","ts":"2024-10-25T22:47:13.205296Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"318ee90c3446d547","local-member-attributes":"{Name:kubernetes-upgrade-234842 ClientURLs:[https://192.168.39.249:2379]}","request-path":"/0/members/318ee90c3446d547/attributes","cluster-id":"ba21282e7acd13d6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-25T22:47:13.205323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T22:47:13.205688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-25T22:47:13.205735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-25T22:47:13.205365Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T22:47:13.207363Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-25T22:47:13.207533Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-25T22:47:13.208718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.249:2379"}
	{"level":"info","ts":"2024-10-25T22:47:13.209287Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:47:34 up 1 min,  0 users,  load average: 1.02, 0.35, 0.13
	Linux kubernetes-upgrade-234842 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291] <==
	I1025 22:47:23.564500       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I1025 22:47:23.564520       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I1025 22:47:23.564531       1 storage_flowcontrol.go:186] APF bootstrap ensurer is exiting
	I1025 22:47:23.564542       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I1025 22:47:23.564554       1 controller.go:132] Ending legacy_token_tracking_controller
	I1025 22:47:23.564561       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1025 22:47:23.564571       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1025 22:47:23.564578       1 apiservice_controller.go:134] Shutting down APIServiceRegistrationController
	I1025 22:47:23.564589       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1025 22:47:23.566515       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 22:47:23.566617       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 22:47:23.567547       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1025 22:47:23.567960       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1025 22:47:23.567945       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1025 22:47:23.568149       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 22:47:23.568450       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1025 22:47:23.568813       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1025 22:47:23.567938       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 22:47:23.568002       1 controller.go:157] Shutting down quota evaluator
	I1025 22:47:23.569389       1 controller.go:176] quota evaluator worker shutdown
	I1025 22:47:23.568135       1 secure_serving.go:258] Stopped listening on [::]:8443
	I1025 22:47:23.569425       1 controller.go:176] quota evaluator worker shutdown
	I1025 22:47:23.569433       1 controller.go:176] quota evaluator worker shutdown
	I1025 22:47:23.569443       1 controller.go:176] quota evaluator worker shutdown
	I1025 22:47:23.569447       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [c05de3e64c8739ef3c98a8c77d8ce77e13f27d723bcaa68e8d4a512e205d9382] <==
	I1025 22:47:29.852341       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 22:47:29.852504       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 22:47:29.852540       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 22:47:29.852809       1 shared_informer.go:320] Caches are synced for configmaps
	I1025 22:47:29.852867       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 22:47:29.852915       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 22:47:29.859648       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1025 22:47:29.866867       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1025 22:47:29.870009       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1025 22:47:29.873276       1 aggregator.go:171] initial CRD sync complete...
	I1025 22:47:29.873321       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 22:47:29.873345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 22:47:29.873368       1 cache.go:39] Caches are synced for autoregister controller
	I1025 22:47:29.890500       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1025 22:47:29.890600       1 policy_source.go:224] refreshing policies
	I1025 22:47:29.965900       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 22:47:30.759284       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1025 22:47:31.478234       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.249]
	I1025 22:47:31.479770       1 controller.go:615] quota admission added evaluator for: endpoints
	I1025 22:47:31.490892       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 22:47:31.932906       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1025 22:47:31.953187       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1025 22:47:31.998023       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1025 22:47:32.030447       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 22:47:32.037278       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9] <==
	
	
	==> kube-controller-manager [f358619968d2da00408069edf2611f84f868575f7165fa8fece8aef24a8c6886] <==
	I1025 22:47:33.213168       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1025 22:47:33.213227       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-234842"
	I1025 22:47:33.220688       1 shared_informer.go:320] Caches are synced for expand
	I1025 22:47:33.220801       1 shared_informer.go:320] Caches are synced for HPA
	I1025 22:47:33.222464       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1025 22:47:33.222608       1 shared_informer.go:320] Caches are synced for PVC protection
	I1025 22:47:33.222602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.849µs"
	I1025 22:47:33.226379       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1025 22:47:33.226582       1 shared_informer.go:320] Caches are synced for persistent volume
	I1025 22:47:33.227635       1 shared_informer.go:320] Caches are synced for node
	I1025 22:47:33.227767       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1025 22:47:33.227855       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 22:47:33.227877       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1025 22:47:33.227883       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1025 22:47:33.227999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-234842"
	I1025 22:47:33.233319       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1025 22:47:33.233578       1 shared_informer.go:320] Caches are synced for cronjob
	I1025 22:47:33.271551       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1025 22:47:33.318317       1 shared_informer.go:320] Caches are synced for disruption
	I1025 22:47:33.325353       1 shared_informer.go:320] Caches are synced for resource quota
	I1025 22:47:33.380712       1 shared_informer.go:320] Caches are synced for resource quota
	I1025 22:47:33.421851       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1025 22:47:33.839727       1 shared_informer.go:320] Caches are synced for garbage collector
	I1025 22:47:33.839771       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 22:47:33.864366       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 22:47:08.129070       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 22:47:14.999577       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E1025 22:47:14.999715       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 22:47:15.133434       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1025 22:47:15.133565       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 22:47:15.133652       1 server_linux.go:169] "Using iptables Proxier"
	I1025 22:47:15.137039       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 22:47:15.137560       1 server.go:483] "Version info" version="v1.31.1"
	I1025 22:47:15.137881       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:47:15.149160       1 config.go:199] "Starting service config controller"
	I1025 22:47:15.149306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 22:47:15.149427       1 config.go:105] "Starting endpoint slice config controller"
	I1025 22:47:15.149515       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 22:47:15.151190       1 config.go:328] "Starting node config controller"
	I1025 22:47:15.151237       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 22:47:15.250702       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 22:47:15.252137       1 shared_informer.go:320] Caches are synced for service config
	I1025 22:47:15.252252       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f9c0588bda9d332a5932dd7f667218f4181c92091f4edbd2d6d26f1462774776] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1025 22:47:31.341967       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1025 22:47:31.372739       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.249"]
	E1025 22:47:31.372817       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 22:47:31.432717       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1025 22:47:31.432766       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1025 22:47:31.432796       1 server_linux.go:169] "Using iptables Proxier"
	I1025 22:47:31.437289       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 22:47:31.437639       1 server.go:483] "Version info" version="v1.31.1"
	I1025 22:47:31.437699       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:47:31.442454       1 config.go:199] "Starting service config controller"
	I1025 22:47:31.442585       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1025 22:47:31.442676       1 config.go:105] "Starting endpoint slice config controller"
	I1025 22:47:31.442716       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1025 22:47:31.443881       1 config.go:328] "Starting node config controller"
	I1025 22:47:31.443939       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1025 22:47:31.543274       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1025 22:47:31.543416       1 shared_informer.go:320] Caches are synced for service config
	I1025 22:47:31.544153       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66] <==
	I1025 22:47:08.224230       1 serving.go:386] Generated self-signed cert in-memory
	W1025 22:47:14.901191       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 22:47:14.901596       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 22:47:14.901751       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 22:47:14.901873       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 22:47:14.970602       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1025 22:47:14.970731       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:47:14.973750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 22:47:14.974001       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 22:47:14.974059       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 22:47:14.974188       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 22:47:15.075151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 22:47:23.391703       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1025 22:47:23.391871       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 22:47:23.392055       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1025 22:47:23.392523       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ec26aa28081a0848b4631cd82c1ca4b998b4f0e7fa95ee547a94dc089c5fe79a] <==
	I1025 22:47:28.088476       1 serving.go:386] Generated self-signed cert in-memory
	W1025 22:47:29.797720       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 22:47:29.797773       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 22:47:29.797787       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 22:47:29.797795       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 22:47:29.857440       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1025 22:47:29.857487       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 22:47:29.867408       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1025 22:47:29.867531       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 22:47:29.867600       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1025 22:47:29.867636       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 22:47:29.968543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.781638    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f7206da0a7c04043eac83a915eab48c-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-234842\" (UID: \"6f7206da0a7c04043eac83a915eab48c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.781651    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f7206da0a7c04043eac83a915eab48c-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-234842\" (UID: \"6f7206da0a7c04043eac83a915eab48c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.781667    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/dc85e3d522a6cbc01a3759274f3439e8-etcd-certs\") pod \"etcd-kubernetes-upgrade-234842\" (UID: \"dc85e3d522a6cbc01a3759274f3439e8\") " pod="kube-system/etcd-kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.781685    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/141207779613f0bea9b8dca7f9f7a214-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-234842\" (UID: \"141207779613f0bea9b8dca7f9f7a214\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.781712    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/141207779613f0bea9b8dca7f9f7a214-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-234842\" (UID: \"141207779613f0bea9b8dca7f9f7a214\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:26.965520    3777 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-234842"
	Oct 25 22:47:26 kubernetes-upgrade-234842 kubelet[3777]: E1025 22:47:26.966717    3777 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.249:8443: connect: connection refused" node="kubernetes-upgrade-234842"
	Oct 25 22:47:27 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:27.048213    3777 scope.go:117] "RemoveContainer" containerID="055a7d1fd00e56803e6a94ba810931234ffc39f241f2c00f885ba4e26389d291"
	Oct 25 22:47:27 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:27.050395    3777 scope.go:117] "RemoveContainer" containerID="d3c1bb1bc0d6ee961b8d838fe8ad84e0a4bfbae821d1fa974f107b70899928a9"
	Oct 25 22:47:27 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:27.051547    3777 scope.go:117] "RemoveContainer" containerID="6ee655c2b95666a0cb6d756afb5bed8ac8ec593e912aba8f396ee937c84bfe66"
	Oct 25 22:47:27 kubernetes-upgrade-234842 kubelet[3777]: E1025 22:47:27.165278    3777 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-234842?timeout=10s\": dial tcp 192.168.39.249:8443: connect: connection refused" interval="800ms"
	Oct 25 22:47:27 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:27.368882    3777 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-234842"
	Oct 25 22:47:29 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:29.905854    3777 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-234842"
	Oct 25 22:47:29 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:29.906159    3777 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-234842"
	Oct 25 22:47:29 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:29.906190    3777 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 22:47:29 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:29.907461    3777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.534542    3777 apiserver.go:52] "Watching apiserver"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.564965    3777 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.634136    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd-tmp\") pod \"storage-provisioner\" (UID: \"57fcc11a-5d37-4d8b-8e0e-cd2351cc76fd\") " pod="kube-system/storage-provisioner"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.634257    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7fb759f-d8e1-4879-9f57-7bd22856c380-lib-modules\") pod \"kube-proxy-s4r8h\" (UID: \"f7fb759f-d8e1-4879-9f57-7bd22856c380\") " pod="kube-system/kube-proxy-s4r8h"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.634309    3777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7fb759f-d8e1-4879-9f57-7bd22856c380-xtables-lock\") pod \"kube-proxy-s4r8h\" (UID: \"f7fb759f-d8e1-4879-9f57-7bd22856c380\") " pod="kube-system/kube-proxy-s4r8h"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.839744    3777 scope.go:117] "RemoveContainer" containerID="e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.840404    3777 scope.go:117] "RemoveContainer" containerID="b9b1f91e4d882b8045aea2fe177130730eb7ef8d0f192482a6e19eeaf6889164"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.840766    3777 scope.go:117] "RemoveContainer" containerID="eb231f6bc57f2400df2646c7e3a8cc6dca9cf64d7cda1de46906f5e16f33411c"
	Oct 25 22:47:30 kubernetes-upgrade-234842 kubelet[3777]: I1025 22:47:30.841014    3777 scope.go:117] "RemoveContainer" containerID="b6aeb008ac0899969dd86e7e4427aedc845e1220d3c223f7b43bd249f85cb01d"
	
	
	==> storage-provisioner [c4cb2e1648824cb3ebe8b0bfde150cd73534227f0300e0c9074b946dc91b4415] <==
	I1025 22:47:31.141393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 22:47:31.173952       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 22:47:31.174430       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e903173ead59b483fd49bc261ea04684d88bdf9cc374e09f820bac87e07595d5] <==
	I1025 22:47:07.833728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 22:47:14.969924       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 22:47:14.970146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 22:47:15.016553       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 22:47:15.021197       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"899dd524-ced0-4605-803f-edcf8e7e06b9", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-234842_6da0dbe8-607b-4377-8cb4-97a4ec6ec4e3 became leader
	I1025 22:47:15.021785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-234842_6da0dbe8-607b-4377-8cb4-97a4ec6ec4e3!
	I1025 22:47:15.122699       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-234842_6da0dbe8-607b-4377-8cb4-97a4ec6ec4e3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-234842 -n kubernetes-upgrade-234842
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-234842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-234842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-234842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-234842: (1.159007234s)
--- FAIL: TestKubernetesUpgrade (393.12s)
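To replay the post-mortem collection for this failure by hand, the harness steps above map onto plain minikube/kubectl invocations. This is a minimal sketch that reuses only commands already shown in this report; the profile name and binary path come from this particular run, and the profile must not yet have been cleaned up:

	# host/apiserver state for the profile under test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-234842 -n kubernetes-upgrade-234842
	# last 25 lines of component logs, i.e. the stdout block captured above
	out/minikube-linux-amd64 -p kubernetes-upgrade-234842 logs -n 25
	# list any pods that are not in the Running phase
	kubectl --context kubernetes-upgrade-234842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# cleanup, as performed by the harness at the end of the test
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-234842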

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (274.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m34.473561519s)

                                                
                                                
-- stdout --
	* [old-k8s-version-005932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-005932" primary control-plane node in "old-k8s-version-005932" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:47:36.764798  720410 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:47:36.764919  720410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:47:36.764928  720410 out.go:358] Setting ErrFile to fd 2...
	I1025 22:47:36.764933  720410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:47:36.765139  720410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:47:36.765767  720410 out.go:352] Setting JSON to false
	I1025 22:47:36.766792  720410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19801,"bootTime":1729876656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:47:36.766910  720410 start.go:139] virtualization: kvm guest
	I1025 22:47:36.769508  720410 out.go:177] * [old-k8s-version-005932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:47:36.770875  720410 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:47:36.770923  720410 notify.go:220] Checking for updates...
	I1025 22:47:36.773385  720410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:47:36.774744  720410 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:47:36.775999  720410 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:36.777292  720410 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:47:36.778569  720410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:47:36.780415  720410 config.go:182] Loaded profile config "bridge-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:36.780567  720410 config.go:182] Loaded profile config "embed-certs-601894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:36.780695  720410 config.go:182] Loaded profile config "flannel-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:47:36.780825  720410 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:47:36.817127  720410 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 22:47:36.818480  720410 start.go:297] selected driver: kvm2
	I1025 22:47:36.818495  720410 start.go:901] validating driver "kvm2" against <nil>
	I1025 22:47:36.818509  720410 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:47:36.819226  720410 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:47:36.819332  720410 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:47:36.835321  720410 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:47:36.835390  720410 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 22:47:36.835634  720410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:47:36.835674  720410 cni.go:84] Creating CNI manager for ""
	I1025 22:47:36.835768  720410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:47:36.835787  720410 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 22:47:36.835854  720410 start.go:340] cluster config:
	{Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:47:36.835967  720410 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:47:36.838065  720410 out.go:177] * Starting "old-k8s-version-005932" primary control-plane node in "old-k8s-version-005932" cluster
	I1025 22:47:36.839250  720410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:47:36.839286  720410 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1025 22:47:36.839296  720410 cache.go:56] Caching tarball of preloaded images
	I1025 22:47:36.839396  720410 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:47:36.839410  720410 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1025 22:47:36.839504  720410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/config.json ...
	I1025 22:47:36.839523  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/config.json: {Name:mkac4d3c9eb41287c9fdbb20f7d855858d130a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:47:36.839675  720410 start.go:360] acquireMachinesLock for old-k8s-version-005932: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:47:40.273868  720410 start.go:364] duration metric: took 3.434155533s to acquireMachinesLock for "old-k8s-version-005932"
	I1025 22:47:40.273930  720410 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:47:40.274021  720410 start.go:125] createHost starting for "" (driver="kvm2")
	I1025 22:47:40.276993  720410 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 22:47:40.277169  720410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:47:40.277233  720410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:47:40.294603  720410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
	I1025 22:47:40.295069  720410 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:47:40.295811  720410 main.go:141] libmachine: Using API Version  1
	I1025 22:47:40.295867  720410 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:47:40.296324  720410 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:47:40.296551  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:47:40.296702  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:47:40.296846  720410 start.go:159] libmachine.API.Create for "old-k8s-version-005932" (driver="kvm2")
	I1025 22:47:40.296881  720410 client.go:168] LocalClient.Create starting
	I1025 22:47:40.296915  720410 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem
	I1025 22:47:40.296948  720410 main.go:141] libmachine: Decoding PEM data...
	I1025 22:47:40.296980  720410 main.go:141] libmachine: Parsing certificate...
	I1025 22:47:40.297065  720410 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem
	I1025 22:47:40.297101  720410 main.go:141] libmachine: Decoding PEM data...
	I1025 22:47:40.297121  720410 main.go:141] libmachine: Parsing certificate...
	I1025 22:47:40.297143  720410 main.go:141] libmachine: Running pre-create checks...
	I1025 22:47:40.297154  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .PreCreateCheck
	I1025 22:47:40.297573  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetConfigRaw
	I1025 22:47:40.298000  720410 main.go:141] libmachine: Creating machine...
	I1025 22:47:40.298015  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .Create
	I1025 22:47:40.298178  720410 main.go:141] libmachine: (old-k8s-version-005932) creating KVM machine...
	I1025 22:47:40.298200  720410 main.go:141] libmachine: (old-k8s-version-005932) creating network...
	I1025 22:47:40.299522  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found existing default KVM network
	I1025 22:47:40.301362  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:40.301204  720476 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015910}
	I1025 22:47:40.301418  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | created network xml: 
	I1025 22:47:40.301437  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | <network>
	I1025 22:47:40.301452  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   <name>mk-old-k8s-version-005932</name>
	I1025 22:47:40.301468  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   <dns enable='no'/>
	I1025 22:47:40.301480  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   
	I1025 22:47:40.301489  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1025 22:47:40.301500  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |     <dhcp>
	I1025 22:47:40.301512  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1025 22:47:40.301535  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |     </dhcp>
	I1025 22:47:40.301555  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   </ip>
	I1025 22:47:40.301573  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG |   
	I1025 22:47:40.301581  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | </network>
	I1025 22:47:40.301598  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | 
	I1025 22:47:40.307266  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | trying to create private KVM network mk-old-k8s-version-005932 192.168.39.0/24...
	I1025 22:47:40.386912  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | private KVM network mk-old-k8s-version-005932 192.168.39.0/24 created
	I1025 22:47:40.386942  720410 main.go:141] libmachine: (old-k8s-version-005932) setting up store path in /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932 ...
	I1025 22:47:40.386964  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:40.386877  720476 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:40.386977  720410 main.go:141] libmachine: (old-k8s-version-005932) building disk image from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 22:47:40.387082  720410 main.go:141] libmachine: (old-k8s-version-005932) Downloading /home/jenkins/minikube-integration/19758-661979/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso...
	I1025 22:47:40.689164  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:40.689044  720476 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa...
	I1025 22:47:40.814258  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:40.814120  720476 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/old-k8s-version-005932.rawdisk...
	I1025 22:47:40.814289  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | Writing magic tar header
	I1025 22:47:40.814302  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | Writing SSH key tar header
	I1025 22:47:40.814312  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:40.814236  720476 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932 ...
	I1025 22:47:40.814329  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932
	I1025 22:47:40.814415  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube/machines
	I1025 22:47:40.814470  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932 (perms=drwx------)
	I1025 22:47:40.814480  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:47:40.814512  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins/minikube-integration/19758-661979
	I1025 22:47:40.814523  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1025 22:47:40.814535  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home/jenkins
	I1025 22:47:40.814548  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | checking permissions on dir: /home
	I1025 22:47:40.814560  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube/machines (perms=drwxr-xr-x)
	I1025 22:47:40.814577  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins/minikube-integration/19758-661979/.minikube (perms=drwxr-xr-x)
	I1025 22:47:40.814587  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins/minikube-integration/19758-661979 (perms=drwxrwxr-x)
	I1025 22:47:40.814598  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1025 22:47:40.814606  720410 main.go:141] libmachine: (old-k8s-version-005932) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1025 22:47:40.814618  720410 main.go:141] libmachine: (old-k8s-version-005932) creating domain...
	I1025 22:47:40.814635  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | skipping /home - not owner
	I1025 22:47:40.815853  720410 main.go:141] libmachine: (old-k8s-version-005932) define libvirt domain using xml: 
	I1025 22:47:40.815884  720410 main.go:141] libmachine: (old-k8s-version-005932) <domain type='kvm'>
	I1025 22:47:40.815895  720410 main.go:141] libmachine: (old-k8s-version-005932)   <name>old-k8s-version-005932</name>
	I1025 22:47:40.815902  720410 main.go:141] libmachine: (old-k8s-version-005932)   <memory unit='MiB'>2200</memory>
	I1025 22:47:40.815911  720410 main.go:141] libmachine: (old-k8s-version-005932)   <vcpu>2</vcpu>
	I1025 22:47:40.815924  720410 main.go:141] libmachine: (old-k8s-version-005932)   <features>
	I1025 22:47:40.815932  720410 main.go:141] libmachine: (old-k8s-version-005932)     <acpi/>
	I1025 22:47:40.815939  720410 main.go:141] libmachine: (old-k8s-version-005932)     <apic/>
	I1025 22:47:40.815955  720410 main.go:141] libmachine: (old-k8s-version-005932)     <pae/>
	I1025 22:47:40.815965  720410 main.go:141] libmachine: (old-k8s-version-005932)     
	I1025 22:47:40.815981  720410 main.go:141] libmachine: (old-k8s-version-005932)   </features>
	I1025 22:47:40.815992  720410 main.go:141] libmachine: (old-k8s-version-005932)   <cpu mode='host-passthrough'>
	I1025 22:47:40.816000  720410 main.go:141] libmachine: (old-k8s-version-005932)   
	I1025 22:47:40.816008  720410 main.go:141] libmachine: (old-k8s-version-005932)   </cpu>
	I1025 22:47:40.816020  720410 main.go:141] libmachine: (old-k8s-version-005932)   <os>
	I1025 22:47:40.816031  720410 main.go:141] libmachine: (old-k8s-version-005932)     <type>hvm</type>
	I1025 22:47:40.816062  720410 main.go:141] libmachine: (old-k8s-version-005932)     <boot dev='cdrom'/>
	I1025 22:47:40.816087  720410 main.go:141] libmachine: (old-k8s-version-005932)     <boot dev='hd'/>
	I1025 22:47:40.816098  720410 main.go:141] libmachine: (old-k8s-version-005932)     <bootmenu enable='no'/>
	I1025 22:47:40.816105  720410 main.go:141] libmachine: (old-k8s-version-005932)   </os>
	I1025 22:47:40.816116  720410 main.go:141] libmachine: (old-k8s-version-005932)   <devices>
	I1025 22:47:40.816125  720410 main.go:141] libmachine: (old-k8s-version-005932)     <disk type='file' device='cdrom'>
	I1025 22:47:40.816140  720410 main.go:141] libmachine: (old-k8s-version-005932)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/boot2docker.iso'/>
	I1025 22:47:40.816151  720410 main.go:141] libmachine: (old-k8s-version-005932)       <target dev='hdc' bus='scsi'/>
	I1025 22:47:40.816159  720410 main.go:141] libmachine: (old-k8s-version-005932)       <readonly/>
	I1025 22:47:40.816167  720410 main.go:141] libmachine: (old-k8s-version-005932)     </disk>
	I1025 22:47:40.816177  720410 main.go:141] libmachine: (old-k8s-version-005932)     <disk type='file' device='disk'>
	I1025 22:47:40.816188  720410 main.go:141] libmachine: (old-k8s-version-005932)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1025 22:47:40.816202  720410 main.go:141] libmachine: (old-k8s-version-005932)       <source file='/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/old-k8s-version-005932.rawdisk'/>
	I1025 22:47:40.816213  720410 main.go:141] libmachine: (old-k8s-version-005932)       <target dev='hda' bus='virtio'/>
	I1025 22:47:40.816227  720410 main.go:141] libmachine: (old-k8s-version-005932)     </disk>
	I1025 22:47:40.816237  720410 main.go:141] libmachine: (old-k8s-version-005932)     <interface type='network'>
	I1025 22:47:40.816251  720410 main.go:141] libmachine: (old-k8s-version-005932)       <source network='mk-old-k8s-version-005932'/>
	I1025 22:47:40.816262  720410 main.go:141] libmachine: (old-k8s-version-005932)       <model type='virtio'/>
	I1025 22:47:40.816271  720410 main.go:141] libmachine: (old-k8s-version-005932)     </interface>
	I1025 22:47:40.816282  720410 main.go:141] libmachine: (old-k8s-version-005932)     <interface type='network'>
	I1025 22:47:40.816307  720410 main.go:141] libmachine: (old-k8s-version-005932)       <source network='default'/>
	I1025 22:47:40.816318  720410 main.go:141] libmachine: (old-k8s-version-005932)       <model type='virtio'/>
	I1025 22:47:40.816330  720410 main.go:141] libmachine: (old-k8s-version-005932)     </interface>
	I1025 22:47:40.816338  720410 main.go:141] libmachine: (old-k8s-version-005932)     <serial type='pty'>
	I1025 22:47:40.816353  720410 main.go:141] libmachine: (old-k8s-version-005932)       <target port='0'/>
	I1025 22:47:40.816362  720410 main.go:141] libmachine: (old-k8s-version-005932)     </serial>
	I1025 22:47:40.816373  720410 main.go:141] libmachine: (old-k8s-version-005932)     <console type='pty'>
	I1025 22:47:40.816381  720410 main.go:141] libmachine: (old-k8s-version-005932)       <target type='serial' port='0'/>
	I1025 22:47:40.816393  720410 main.go:141] libmachine: (old-k8s-version-005932)     </console>
	I1025 22:47:40.816402  720410 main.go:141] libmachine: (old-k8s-version-005932)     <rng model='virtio'>
	I1025 22:47:40.816412  720410 main.go:141] libmachine: (old-k8s-version-005932)       <backend model='random'>/dev/random</backend>
	I1025 22:47:40.816421  720410 main.go:141] libmachine: (old-k8s-version-005932)     </rng>
	I1025 22:47:40.816429  720410 main.go:141] libmachine: (old-k8s-version-005932)     
	I1025 22:47:40.816443  720410 main.go:141] libmachine: (old-k8s-version-005932)     
	I1025 22:47:40.816484  720410 main.go:141] libmachine: (old-k8s-version-005932)   </devices>
	I1025 22:47:40.816503  720410 main.go:141] libmachine: (old-k8s-version-005932) </domain>
	I1025 22:47:40.816516  720410 main.go:141] libmachine: (old-k8s-version-005932) 
	I1025 22:47:40.820979  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:f5:a5:aa in network default
	I1025 22:47:40.821805  720410 main.go:141] libmachine: (old-k8s-version-005932) starting domain...
	I1025 22:47:40.821826  720410 main.go:141] libmachine: (old-k8s-version-005932) ensuring networks are active...
	I1025 22:47:40.821838  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:40.822951  720410 main.go:141] libmachine: (old-k8s-version-005932) Ensuring network default is active
	I1025 22:47:40.823098  720410 main.go:141] libmachine: (old-k8s-version-005932) Ensuring network mk-old-k8s-version-005932 is active
	I1025 22:47:40.824006  720410 main.go:141] libmachine: (old-k8s-version-005932) getting domain XML...
	I1025 22:47:40.824923  720410 main.go:141] libmachine: (old-k8s-version-005932) creating domain...
	I1025 22:47:42.212663  720410 main.go:141] libmachine: (old-k8s-version-005932) waiting for IP...
	I1025 22:47:42.215004  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:42.215587  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:42.215656  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:42.215575  720476 retry.go:31] will retry after 276.667921ms: waiting for domain to come up
	I1025 22:47:42.494253  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:42.495019  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:42.495050  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:42.494982  720476 retry.go:31] will retry after 239.8324ms: waiting for domain to come up
	I1025 22:47:42.736708  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:42.737282  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:42.737310  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:42.737266  720476 retry.go:31] will retry after 307.834847ms: waiting for domain to come up
	I1025 22:47:43.047020  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:43.047709  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:43.047742  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:43.047661  720476 retry.go:31] will retry after 381.957222ms: waiting for domain to come up
	I1025 22:47:43.431400  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:43.432016  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:43.432083  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:43.431998  720476 retry.go:31] will retry after 737.569257ms: waiting for domain to come up
	I1025 22:47:44.170955  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:44.171598  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:44.171630  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:44.171533  720476 retry.go:31] will retry after 802.64843ms: waiting for domain to come up
	I1025 22:47:44.975646  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:44.976103  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:44.976167  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:44.976079  720476 retry.go:31] will retry after 1.153314628s: waiting for domain to come up
	I1025 22:47:46.131163  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:46.131625  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:46.131657  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:46.131601  720476 retry.go:31] will retry after 1.378906761s: waiting for domain to come up
	I1025 22:47:47.512200  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:47.512693  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:47.512720  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:47.512681  720476 retry.go:31] will retry after 1.292779227s: waiting for domain to come up
	I1025 22:47:48.806687  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:48.807277  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:48.807308  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:48.807252  720476 retry.go:31] will retry after 2.280599669s: waiting for domain to come up
	I1025 22:47:51.089293  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:51.090012  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:51.090042  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:51.089911  720476 retry.go:31] will retry after 1.978378219s: waiting for domain to come up
	I1025 22:47:53.070121  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:53.070652  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:53.070686  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:53.070620  720476 retry.go:31] will retry after 2.687850352s: waiting for domain to come up
	I1025 22:47:55.759911  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:55.760543  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:55.760577  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:55.760515  720476 retry.go:31] will retry after 3.249637593s: waiting for domain to come up
	I1025 22:47:59.153036  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:47:59.153845  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:47:59.153975  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:47:59.153827  720476 retry.go:31] will retry after 3.581762967s: waiting for domain to come up
	I1025 22:48:02.738207  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.738775  720410 main.go:141] libmachine: (old-k8s-version-005932) found domain IP: 192.168.39.215
	I1025 22:48:02.738821  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has current primary IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.738835  720410 main.go:141] libmachine: (old-k8s-version-005932) reserving static IP address...
	I1025 22:48:02.739148  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-005932", mac: "52:54:00:fd:66:94", ip: "192.168.39.215"} in network mk-old-k8s-version-005932
	I1025 22:48:02.818785  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | Getting to WaitForSSH function...
	I1025 22:48:02.818819  720410 main.go:141] libmachine: (old-k8s-version-005932) reserved static IP address 192.168.39.215 for domain old-k8s-version-005932
	I1025 22:48:02.818834  720410 main.go:141] libmachine: (old-k8s-version-005932) waiting for SSH...
	I1025 22:48:02.822044  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.822493  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:02.822520  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.822715  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | Using SSH client type: external
	I1025 22:48:02.822766  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa (-rw-------)
	I1025 22:48:02.822802  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:48:02.822820  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | About to run SSH command:
	I1025 22:48:02.822835  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | exit 0
	I1025 22:48:02.957071  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | SSH cmd err, output: <nil>: 
	I1025 22:48:02.957297  720410 main.go:141] libmachine: (old-k8s-version-005932) KVM machine creation complete
	I1025 22:48:02.957652  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetConfigRaw
	I1025 22:48:02.958242  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:02.958441  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:02.958634  720410 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1025 22:48:02.958647  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetState
	I1025 22:48:02.960012  720410 main.go:141] libmachine: Detecting operating system of created instance...
	I1025 22:48:02.960028  720410 main.go:141] libmachine: Waiting for SSH to be available...
	I1025 22:48:02.960035  720410 main.go:141] libmachine: Getting to WaitForSSH function...
	I1025 22:48:02.960044  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:02.962569  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.962946  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:02.962983  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:02.963125  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:02.963285  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:02.963479  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:02.963659  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:02.963849  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:02.964102  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:02.964117  720410 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1025 22:48:03.076511  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:48:03.076535  720410 main.go:141] libmachine: Detecting the provisioner...
	I1025 22:48:03.076544  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:03.079865  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.080309  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:03.080342  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.080451  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:03.080665  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.080867  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.081067  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:03.081283  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:03.081508  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:03.081525  720410 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1025 22:48:03.202339  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1025 22:48:03.202472  720410 main.go:141] libmachine: found compatible host: buildroot
	I1025 22:48:03.202489  720410 main.go:141] libmachine: Provisioning with buildroot...
	I1025 22:48:03.202503  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:48:03.202827  720410 buildroot.go:166] provisioning hostname "old-k8s-version-005932"
	I1025 22:48:03.202860  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:48:03.203078  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:03.206236  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.206763  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:03.206807  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.206972  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:03.207151  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.207371  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.207534  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:03.207737  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:03.207964  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:03.207982  720410 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-005932 && echo "old-k8s-version-005932" | sudo tee /etc/hostname
	I1025 22:48:03.337255  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-005932
	
	I1025 22:48:03.337293  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:03.340814  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.341338  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:03.341370  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.341595  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:03.341860  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.342070  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:03.342316  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:03.342535  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:03.342770  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:03.342811  720410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-005932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-005932/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-005932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:48:03.470720  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:48:03.470754  720410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:48:03.470774  720410 buildroot.go:174] setting up certificates
	I1025 22:48:03.470786  720410 provision.go:84] configureAuth start
	I1025 22:48:03.470795  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:48:03.471068  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:48:03.474161  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.474664  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:03.474691  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.474908  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:03.477526  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.477941  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:03.477961  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:03.478133  720410 provision.go:143] copyHostCerts
	I1025 22:48:03.478212  720410 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:48:03.478227  720410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:48:03.478291  720410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:48:03.478424  720410 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:48:03.478436  720410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:48:03.478476  720410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:48:03.478567  720410 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:48:03.478579  720410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:48:03.478608  720410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:48:03.478702  720410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-005932 san=[127.0.0.1 192.168.39.215 localhost minikube old-k8s-version-005932]
	I1025 22:48:04.226796  720410 provision.go:177] copyRemoteCerts
	I1025 22:48:04.226859  720410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:48:04.226886  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.229745  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.230164  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.230197  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.230408  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.230631  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.230841  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.231002  720410 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:48:04.324599  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:48:04.352065  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 22:48:04.379612  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:48:04.411343  720410 provision.go:87] duration metric: took 940.540579ms to configureAuth
	I1025 22:48:04.411380  720410 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:48:04.411597  720410 config.go:182] Loaded profile config "old-k8s-version-005932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1025 22:48:04.411722  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.415074  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.415442  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.415475  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.415598  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.415804  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.416038  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.416191  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.416352  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:04.416521  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:04.416538  720410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:48:04.663703  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:48:04.663734  720410 main.go:141] libmachine: Checking connection to Docker...
	I1025 22:48:04.663745  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetURL
	I1025 22:48:04.665176  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | using libvirt version 6000000
	I1025 22:48:04.667421  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.667839  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.667862  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.668191  720410 main.go:141] libmachine: Docker is up and running!
	I1025 22:48:04.668229  720410 main.go:141] libmachine: Reticulating splines...
	I1025 22:48:04.668238  720410 client.go:171] duration metric: took 24.37134628s to LocalClient.Create
	I1025 22:48:04.668264  720410 start.go:167] duration metric: took 24.371419777s to libmachine.API.Create "old-k8s-version-005932"
	I1025 22:48:04.668275  720410 start.go:293] postStartSetup for "old-k8s-version-005932" (driver="kvm2")
	I1025 22:48:04.668285  720410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:48:04.668304  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:04.668597  720410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:48:04.668644  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.671270  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.671751  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.671794  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.671874  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.672086  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.672266  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.672427  720410 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:48:04.767813  720410 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:48:04.772516  720410 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:48:04.772545  720410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:48:04.772621  720410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:48:04.772737  720410 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:48:04.772868  720410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:48:04.783603  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:48:04.813896  720410 start.go:296] duration metric: took 145.603352ms for postStartSetup
	I1025 22:48:04.813969  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetConfigRaw
	I1025 22:48:04.814671  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:48:04.817546  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.817992  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.818026  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.818299  720410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/config.json ...
	I1025 22:48:04.818498  720410 start.go:128] duration metric: took 24.544461525s to createHost
	I1025 22:48:04.818528  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.821268  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.821705  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.821738  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.821844  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.822038  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.822349  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.822642  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.822866  720410 main.go:141] libmachine: Using SSH client type: native
	I1025 22:48:04.823059  720410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:48:04.823073  720410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:48:04.946257  720410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729896484.915487223
	
	I1025 22:48:04.946285  720410 fix.go:216] guest clock: 1729896484.915487223
	I1025 22:48:04.946295  720410 fix.go:229] Guest: 2024-10-25 22:48:04.915487223 +0000 UTC Remote: 2024-10-25 22:48:04.818514507 +0000 UTC m=+28.093023756 (delta=96.972716ms)
	I1025 22:48:04.946347  720410 fix.go:200] guest clock delta is within tolerance: 96.972716ms
	I1025 22:48:04.946357  720410 start.go:83] releasing machines lock for "old-k8s-version-005932", held for 24.672459301s
	I1025 22:48:04.946401  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:04.946712  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:48:04.950006  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.950419  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.950451  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.950580  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:04.951155  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:04.951349  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:48:04.951440  720410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:48:04.951505  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.951614  720410 ssh_runner.go:195] Run: cat /version.json
	I1025 22:48:04.951643  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:48:04.954492  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.954781  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.954911  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.954956  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.955043  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.955128  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:04.955149  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:04.955189  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.955314  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:48:04.955377  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.955512  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:48:04.955587  720410 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:48:04.955658  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:48:04.955810  720410 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:48:05.065371  720410 ssh_runner.go:195] Run: systemctl --version
	I1025 22:48:05.074598  720410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:48:05.246509  720410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:48:05.253803  720410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:48:05.253883  720410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:48:05.271142  720410 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:48:05.271182  720410 start.go:495] detecting cgroup driver to use...
	I1025 22:48:05.271266  720410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:48:05.293511  720410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:48:05.310082  720410 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:48:05.310153  720410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:48:05.327016  720410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:48:05.342097  720410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:48:05.488441  720410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:48:05.668508  720410 docker.go:233] disabling docker service ...
	I1025 22:48:05.668598  720410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:48:05.689639  720410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:48:05.707617  720410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:48:05.888914  720410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:48:06.057330  720410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:48:06.076889  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:48:06.108396  720410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 22:48:06.108477  720410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:48:06.122878  720410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:48:06.122960  720410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:48:06.136877  720410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:48:06.151587  720410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:48:06.164939  720410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:48:06.181180  720410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:48:06.195160  720410 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:48:06.195240  720410 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:48:06.214393  720410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:48:06.226225  720410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:48:06.379416  720410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 22:48:06.481060  720410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:48:06.481171  720410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:48:06.487758  720410 start.go:563] Will wait 60s for crictl version
	I1025 22:48:06.487834  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:06.492973  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:48:06.543782  720410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:48:06.543883  720410 ssh_runner.go:195] Run: crio --version
	I1025 22:48:06.575072  720410 ssh_runner.go:195] Run: crio --version
	I1025 22:48:06.612758  720410 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1025 22:48:06.614087  720410 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:48:06.618134  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:06.618644  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:47:57 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:48:06.618683  720410 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:48:06.618889  720410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 22:48:06.623913  720410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:48:06.644160  720410 kubeadm.go:883] updating cluster {Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:48:06.644305  720410 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:48:06.644364  720410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:48:06.680786  720410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:48:06.680895  720410 ssh_runner.go:195] Run: which lz4
	I1025 22:48:06.685318  720410 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:48:06.689952  720410 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:48:06.689983  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1025 22:48:08.461146  720410 crio.go:462] duration metric: took 1.775859012s to copy over tarball
	I1025 22:48:08.461266  720410 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:48:11.168929  720410 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707616501s)
	I1025 22:48:11.168974  720410 crio.go:469] duration metric: took 2.70778646s to extract the tarball
	I1025 22:48:11.168986  720410 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:48:11.211061  720410 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:48:11.255609  720410 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:48:11.255647  720410 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 22:48:11.255723  720410 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.255759  720410 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.255772  720410 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1025 22:48:11.255778  720410 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.255729  720410 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:48:11.255758  720410 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.255882  720410 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.255742  720410 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.257547  720410 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 22:48:11.257555  720410 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:48:11.257578  720410 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.257562  720410 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.257595  720410 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.257547  720410 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.257638  720410 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.257671  720410 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.417381  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.423430  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.436740  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.450247  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.477021  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 22:48:11.486123  720410 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1025 22:48:11.486171  720410 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.486219  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.487402  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.494040  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.508588  720410 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1025 22:48:11.508651  720410 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.508703  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.556732  720410 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1025 22:48:11.556781  720410 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.556836  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.562713  720410 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1025 22:48:11.562764  720410 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.562816  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.615205  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.615218  720410 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1025 22:48:11.615277  720410 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1025 22:48:11.615321  720410 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.615358  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.615373  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.615389  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.615292  720410 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1025 22:48:11.615442  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.615452  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.615225  720410 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1025 22:48:11.615508  720410 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.615535  720410 ssh_runner.go:195] Run: which crictl
	I1025 22:48:11.717496  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.730654  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.730664  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.730731  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.730808  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.734507  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:48:11.734509  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.809009  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:48:11.879006  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:11.895038  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:48:11.901007  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:48:11.901146  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:11.904639  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:48:11.904647  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:48:11.976839  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1025 22:48:12.019770  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:48:12.060033  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1025 22:48:12.060117  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:48:12.060141  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1025 22:48:12.083889  720410 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:48:12.083936  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1025 22:48:12.118277  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1025 22:48:12.131674  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1025 22:48:12.144797  720410 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1025 22:48:12.422202  720410 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:48:12.566436  720410 cache_images.go:92] duration metric: took 1.310765759s to LoadCachedImages
	W1025 22:48:12.566533  720410 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
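The block above is minikube's image-cache reconciliation: for each required image it asks the runtime (via podman image inspect) for the stored image ID, flags images whose ID does not match the expected digest as "needs transfer", removes the stale copy with crictl rmi, and then tries to load the image from the local cache directory. A minimal sketch of the existence check driving those decisions, mirroring the inspect calls in the log (hypothetical helper, not minikube's cache_images.go; the example image and digest are copied from the lines above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageExistsWithID reports whether the runtime already holds image at the
	// expected ID, mirroring `sudo podman image inspect --format {{.Id}}` above.
	func imageExistsWithID(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err != nil {
			return false // inspect exits non-zero when the image is absent
		}
		return strings.TrimSpace(string(out)) == wantID
	}

	func main() {
		img := "registry.k8s.io/kube-proxy:v1.20.0"
		want := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
		if !imageExistsWithID(img, want) {
			fmt.Printf("%q needs transfer: not present at expected hash\n", img)
			// minikube would now remove the stale image and load the cached tarball.
		}
	}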
	I1025 22:48:12.566552  720410 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.20.0 crio true true} ...
	I1025 22:48:12.566692  720410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-005932 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
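The kubelet systemd drop-in and node config printed above are rendered from per-node values (runtime, Kubernetes version, node name, node IP). A rough text/template sketch of that kind of rendering, with a trimmed flag list and illustrative field names (this is not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative only; the real unit carries many more kubelet flags.
	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values taken from the log above.
		err := t.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.20.0",
			"NodeName":          "old-k8s-version-005932",
			"NodeIP":            "192.168.39.215",
		})
		if err != nil {
			panic(err)
		}
	}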
	I1025 22:48:12.566781  720410 ssh_runner.go:195] Run: crio config
	I1025 22:48:12.618477  720410 cni.go:84] Creating CNI manager for ""
	I1025 22:48:12.618510  720410 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:48:12.618524  720410 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 22:48:12.618554  720410 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-005932 NodeName:old-k8s-version-005932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 22:48:12.618743  720410 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-005932"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
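The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. A small sketch for sanity-checking such a stream ahead of time, assuming gopkg.in/yaml.v3 and a local file name (not part of minikube):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // path is an assumption
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err) // a malformed document would also trip kubeadm later
			}
			// Each document in the stream declares its own apiVersion and kind.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}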
	
	I1025 22:48:12.618820  720410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1025 22:48:12.629775  720410 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:48:12.629860  720410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:48:12.640404  720410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1025 22:48:12.657684  720410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:48:12.676828  720410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1025 22:48:12.696552  720410 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1025 22:48:12.701952  720410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
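The bash one-liner above pins control-plane.minikube.internal in /etc/hosts by filtering out any stale entry and appending the current IP via a temp file. The same idea in Go, as a hypothetical sketch (it writes in place rather than through a temp file, so it is not atomic like the shell version):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinControlPlaneHost drops any stale control-plane.minikube.internal entry
	// and appends one for ip, mirroring the grep -v / echo pipeline above.
	func pinControlPlaneHost(hostsPath, ip string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue // stale entry, as removed by `grep -v` in the shell version
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinControlPlaneHost("/etc/hosts", "192.168.39.215"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}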
	I1025 22:48:12.714622  720410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:48:12.846997  720410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:48:12.863720  720410 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932 for IP: 192.168.39.215
	I1025 22:48:12.863753  720410 certs.go:194] generating shared ca certs ...
	I1025 22:48:12.863778  720410 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:12.863955  720410 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:48:12.864025  720410 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:48:12.864039  720410 certs.go:256] generating profile certs ...
	I1025 22:48:12.864111  720410 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.key
	I1025 22:48:12.864130  720410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.crt with IP's: []
	I1025 22:48:13.120690  720410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.crt ...
	I1025 22:48:13.120724  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.crt: {Name:mkb25e4b27898e24fd6f06069cb6f8fcdb024907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:13.120924  720410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.key ...
	I1025 22:48:13.120943  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.key: {Name:mkcfd190b90697daed4d642707174145f6f920ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:13.121078  720410 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key.fb60c9ca
	I1025 22:48:13.121109  720410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt.fb60c9ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I1025 22:48:13.300666  720410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt.fb60c9ca ...
	I1025 22:48:13.300698  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt.fb60c9ca: {Name:mk4ebe75859114a4b51b5b243dc769b64c816470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:13.300898  720410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key.fb60c9ca ...
	I1025 22:48:13.300919  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key.fb60c9ca: {Name:mk069f1ab7484d0be783774fd4ec86c8ac483662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:13.301046  720410 certs.go:381] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt.fb60c9ca -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt
	I1025 22:48:13.301148  720410 certs.go:385] copying /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key.fb60c9ca -> /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key
	I1025 22:48:13.301261  720410 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key
	I1025 22:48:13.301288  720410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.crt with IP's: []
	I1025 22:48:13.457985  720410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.crt ...
	I1025 22:48:13.458023  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.crt: {Name:mkde74935c30cd4c94113a0ee5df5e268dea234c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:48:13.540569  720410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key ...
	I1025 22:48:13.540622  720410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key: {Name:mk5addb08ece68ac0b4420267af1dd44d9b88e34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
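The crypto.go lines above generate the profile certificates (client, apiserver, proxy-client) signed by the shared minikube CA, with the apiserver certificate carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]. A minimal standard-library sketch of what such signing amounts to, using a throwaway self-signed CA in place of minikubeCA (illustrative only, not minikube's crypto.go; error handling elided):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA standing in for the shared minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate signed by the CA, carrying the IP SANs from the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.215"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}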
	I1025 22:48:13.540925  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:48:13.541064  720410 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:48:13.541083  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:48:13.541116  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:48:13.541147  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:48:13.541177  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:48:13.541228  720410 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:48:13.542093  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:48:13.570043  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:48:13.594232  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:48:13.619309  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:48:13.643913  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 22:48:13.683519  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 22:48:13.712588  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:48:13.742598  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 22:48:13.776715  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:48:13.811704  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:48:13.839609  720410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:48:13.865878  720410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:48:13.884273  720410 ssh_runner.go:195] Run: openssl version
	I1025 22:48:13.890795  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:48:13.903784  720410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:48:13.908843  720410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:48:13.908911  720410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:48:13.915282  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:48:13.928243  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:48:13.940353  720410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:48:13.946111  720410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:48:13.946171  720410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:48:13.952605  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:48:13.964601  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:48:13.976288  720410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:48:13.981423  720410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:48:13.981490  720410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:48:13.987259  720410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:48:13.999189  720410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:48:14.005280  720410 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 22:48:14.005338  720410 kubeadm.go:392] StartCluster: {Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:48:14.005432  720410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:48:14.005491  720410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:48:14.059239  720410 cri.go:89] found id: ""
	I1025 22:48:14.059320  720410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:48:14.074285  720410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:48:14.086200  720410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:48:14.097808  720410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:48:14.097834  720410 kubeadm.go:157] found existing configuration files:
	
	I1025 22:48:14.097885  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:48:14.109437  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:48:14.109511  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:48:14.120292  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:48:14.130057  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:48:14.130141  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:48:14.143165  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:48:14.154274  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:48:14.154343  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:48:14.168495  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:48:14.180973  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:48:14.181045  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:48:14.194807  720410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:48:14.339193  720410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:48:14.339288  720410 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:48:14.519585  720410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:48:14.519763  720410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:48:14.519916  720410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:48:14.757954  720410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:48:14.760996  720410 out.go:235]   - Generating certificates and keys ...
	I1025 22:48:14.761109  720410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:48:14.761193  720410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:48:14.867835  720410 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 22:48:15.166652  720410 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1025 22:48:15.272687  720410 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1025 22:48:15.699611  720410 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1025 22:48:15.979002  720410 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1025 22:48:15.979335  720410 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	I1025 22:48:16.201278  720410 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1025 22:48:16.201486  720410 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	I1025 22:48:16.347482  720410 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 22:48:16.602488  720410 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 22:48:16.918158  720410 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1025 22:48:16.918238  720410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:48:17.050593  720410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:48:17.202006  720410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:48:17.367300  720410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:48:17.837405  720410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:48:17.859378  720410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:48:17.868611  720410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:48:17.868680  720410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:48:18.040396  720410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:48:18.042329  720410 out.go:235]   - Booting up control plane ...
	I1025 22:48:18.042465  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:48:18.055429  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:48:18.057549  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:48:18.058621  720410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:48:18.066500  720410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:48:58.058860  720410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:48:58.060321  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:48:58.060574  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:49:03.060647  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:49:03.060984  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:49:13.059785  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:49:13.060014  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:49:33.059061  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:49:33.059362  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:50:13.059727  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:50:13.060001  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:50:13.060041  720410 kubeadm.go:310] 
	I1025 22:50:13.060077  720410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 22:50:13.060110  720410 kubeadm.go:310] 		timed out waiting for the condition
	I1025 22:50:13.060113  720410 kubeadm.go:310] 
	I1025 22:50:13.060162  720410 kubeadm.go:310] 	This error is likely caused by:
	I1025 22:50:13.060216  720410 kubeadm.go:310] 		- The kubelet is not running
	I1025 22:50:13.060331  720410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:50:13.060340  720410 kubeadm.go:310] 
	I1025 22:50:13.060492  720410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:50:13.060560  720410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 22:50:13.060609  720410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 22:50:13.060618  720410 kubeadm.go:310] 
	I1025 22:50:13.060778  720410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:50:13.060905  720410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 22:50:13.060918  720410 kubeadm.go:310] 
	I1025 22:50:13.061051  720410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 22:50:13.061132  720410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 22:50:13.061234  720410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 22:50:13.061336  720410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 22:50:13.061349  720410 kubeadm.go:310] 
	I1025 22:50:13.062055  720410 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:50:13.062148  720410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:50:13.062219  720410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
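The repeated [kubelet-check] messages above are kubeadm polling the kubelet's local healthz endpoint until it answers or the 4m0s wait-control-plane deadline expires; "connection refused" on 127.0.0.1:10248 means nothing is listening there at all, i.e. the kubelet never came up. A roughly equivalent poll loop as an illustrative sketch (not kubeadm's code):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Poll the kubelet healthz endpoint until it returns 200 or the overall
		// deadline (kubeadm allows up to 4m0s) passes.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			// "connection refused" here means no process is listening on 10248 yet.
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}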
	W1025 22:50:13.062394  720410 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-005932] and IPs [192.168.39.215 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 22:50:13.062442  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:50:13.516449  720410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:50:13.531200  720410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:50:13.541592  720410 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:50:13.541618  720410 kubeadm.go:157] found existing configuration files:
	
	I1025 22:50:13.541671  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:50:13.550958  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:50:13.551022  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:50:13.560586  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:50:13.569987  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:50:13.570046  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:50:13.579264  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:50:13.588231  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:50:13.588288  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:50:13.597661  720410 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:50:13.606926  720410 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:50:13.606988  720410 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:50:13.616887  720410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:50:13.683942  720410 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:50:13.684016  720410 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:50:13.832032  720410 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:50:13.832173  720410 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:50:13.832333  720410 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:50:14.009956  720410 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:50:14.011823  720410 out.go:235]   - Generating certificates and keys ...
	I1025 22:50:14.011939  720410 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:50:14.012064  720410 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:50:14.012187  720410 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:50:14.012276  720410 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:50:14.012377  720410 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:50:14.012449  720410 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:50:14.012540  720410 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:50:14.012655  720410 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:50:14.013271  720410 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:50:14.013983  720410 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:50:14.014180  720410 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:50:14.014264  720410 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:50:14.448119  720410 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:50:14.604836  720410 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:50:14.838712  720410 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:50:15.392247  720410 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:50:15.412263  720410 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:50:15.413306  720410 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:50:15.413380  720410 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:50:15.556022  720410 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:50:15.557888  720410 out.go:235]   - Booting up control plane ...
	I1025 22:50:15.557998  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:50:15.562139  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:50:15.570720  720410 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:50:15.571789  720410 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:50:15.574563  720410 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:50:55.576874  720410 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:50:55.577017  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:50:55.577284  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:51:00.577815  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:51:00.578107  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:51:10.578801  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:51:10.579083  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:51:30.578568  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:51:30.578778  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:52:10.578929  720410 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:52:10.579153  720410 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:52:10.579168  720410 kubeadm.go:310] 
	I1025 22:52:10.579218  720410 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 22:52:10.579307  720410 kubeadm.go:310] 		timed out waiting for the condition
	I1025 22:52:10.579334  720410 kubeadm.go:310] 
	I1025 22:52:10.579385  720410 kubeadm.go:310] 	This error is likely caused by:
	I1025 22:52:10.579420  720410 kubeadm.go:310] 		- The kubelet is not running
	I1025 22:52:10.579564  720410 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:52:10.579600  720410 kubeadm.go:310] 
	I1025 22:52:10.579744  720410 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:52:10.579798  720410 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 22:52:10.579847  720410 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 22:52:10.579858  720410 kubeadm.go:310] 
	I1025 22:52:10.580005  720410 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:52:10.580132  720410 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 22:52:10.580145  720410 kubeadm.go:310] 
	I1025 22:52:10.580285  720410 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 22:52:10.580404  720410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 22:52:10.580504  720410 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 22:52:10.580599  720410 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 22:52:10.580611  720410 kubeadm.go:310] 
	I1025 22:52:10.580895  720410 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:52:10.581056  720410 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:52:10.581168  720410 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 22:52:10.581221  720410 kubeadm.go:394] duration metric: took 3m56.575887717s to StartCluster
	I1025 22:52:10.581280  720410 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:52:10.581347  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:52:10.631733  720410 cri.go:89] found id: ""
	I1025 22:52:10.631771  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.631779  720410 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:52:10.631787  720410 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:52:10.631854  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:52:10.668172  720410 cri.go:89] found id: ""
	I1025 22:52:10.668204  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.668213  720410 logs.go:284] No container was found matching "etcd"
	I1025 22:52:10.668219  720410 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:52:10.668279  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:52:10.702685  720410 cri.go:89] found id: ""
	I1025 22:52:10.702716  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.702727  720410 logs.go:284] No container was found matching "coredns"
	I1025 22:52:10.702746  720410 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:52:10.702816  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:52:10.739470  720410 cri.go:89] found id: ""
	I1025 22:52:10.739509  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.739521  720410 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:52:10.739529  720410 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:52:10.739584  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:52:10.772856  720410 cri.go:89] found id: ""
	I1025 22:52:10.772885  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.772893  720410 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:52:10.772898  720410 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:52:10.772969  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:52:10.808401  720410 cri.go:89] found id: ""
	I1025 22:52:10.808434  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.808451  720410 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:52:10.808458  720410 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:52:10.808512  720410 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:52:10.842968  720410 cri.go:89] found id: ""
	I1025 22:52:10.842999  720410 logs.go:282] 0 containers: []
	W1025 22:52:10.843008  720410 logs.go:284] No container was found matching "kindnet"
	I1025 22:52:10.843019  720410 logs.go:123] Gathering logs for kubelet ...
	I1025 22:52:10.843032  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:52:10.894603  720410 logs.go:123] Gathering logs for dmesg ...
	I1025 22:52:10.894636  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:52:10.908999  720410 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:52:10.909033  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:52:11.032160  720410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:52:11.032186  720410 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:52:11.032201  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:52:11.138969  720410 logs.go:123] Gathering logs for container status ...
	I1025 22:52:11.139029  720410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 22:52:11.180565  720410 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 22:52:11.180630  720410 out.go:270] * 
	W1025 22:52:11.180687  720410 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:52:11.180700  720410 out.go:270] * 
	W1025 22:52:11.181591  720410 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 22:52:11.185263  720410 out.go:201] 
	W1025 22:52:11.186489  720410 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:52:11.186538  720410 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 22:52:11.186557  720410 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 22:52:11.187860  720410 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 6 (245.219057ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:52:11.474801  725849 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-005932" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-005932" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (274.77s)
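The failure above is kubeadm's wait-control-plane phase timing out: it polls the kubelet health endpoint at http://localhost:10248/healthz and every attempt was refused, so no control-plane container ever came up. A minimal triage pass over the guest, using only the commands already suggested in the log (the profile name old-k8s-version-005932 comes from this run; the exact invocations below are illustrative, not part of the test):

	# probe the endpoint the [kubelet-check] phase polls
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "curl -sSL http://localhost:10248/healthz"
	# inspect the kubelet unit and its recent journal
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# list whatever control-plane containers CRI-O managed to create
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

If the journal points at a cgroup-driver mismatch, the retry minikube itself suggests is to pass --extra-config=kubelet.cgroup-driver=systemd on the next minikube start.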

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-005932 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-005932 create -f testdata/busybox.yaml: exit status 1 (49.279653ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-005932" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-005932 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 6 (224.273518ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:52:11.748535  725889 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-005932" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-005932" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 6 (222.324806ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:52:11.972784  725919 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-005932" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-005932" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
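DeployApp fails immediately because FirstStart never registered the cluster endpoint: the status check reports that "old-k8s-version-005932" does not appear in the test's kubeconfig, so the kubectl context named after the profile does not exist and the busybox create has nothing to talk to. A quick way to confirm the missing context (hypothetical shell session, not part of the recorded run; the kubeconfig path is taken from the status output above):

	# the profile's context is absent from the kubeconfig the test points at
	KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig kubectl config get-contexts
	# minikube's own advice for a stale context; it only helps once the cluster is actually reachable
	out/minikube-linux-amd64 -p old-k8s-version-005932 update-context

Here the root cause is still the kubelet failure from FirstStart, so `minikube update-context` has nothing valid to write.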

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-005932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1025 22:52:22.936338  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:22.942711  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:22.954043  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:22.975387  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:23.016776  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:23.098186  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:23.259730  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:23.581809  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:24.223196  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:25.504888  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:26.484086  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:28.066565  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:28.693267  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:33.188388  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:43.429746  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.097401  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.103774  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.115235  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.136646  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.178063  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.259556  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.421116  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:57.743001  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:58.385231  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:59.666706  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:02.228697  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:03.911757  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:04.794029  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:06.946380  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:07.350457  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:09.655598  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:17.592401  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:53:30.876115  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-005932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m25.754905711s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-005932 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-005932 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-005932 describe deploy/metrics-server -n kube-system: exit status 1 (46.157643ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-005932" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-005932 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 6 (245.268302ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 22:53:38.017237  726271 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-005932" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-005932" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (86.05s)
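For reference, the two errors in this entry are distinct: the addon `kubectl apply` run against /var/lib/minikube/kubeconfig could not reach an API server on localhost:8443, and the follow-up `kubectl describe` on the host failed because the "old-k8s-version-005932" context is missing from the Jenkins kubeconfig. The following is a minimal, self-contained Go sketch (not part of the minikube test suite; the kubeconfig path and profile name are copied from this run, and k8s.io/client-go is assumed to be available on the module path) showing how both conditions can be probed outside the test harness:

	package main

	import (
		"fmt"
		"net"
		"net/url"
		"os"
		"time"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path and profile name are taken from the failing run above
		// (assumptions for this sketch, not hard-coded anywhere in minikube).
		kubeconfig := "/home/jenkins/minikube-integration/19758-661979/kubeconfig"
		profile := "old-k8s-version-005932"

		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}

		// Mirrors the `context "old-k8s-version-005932" does not exist` error.
		ctx, ok := cfg.Contexts[profile]
		if !ok {
			fmt.Printf("context %q does not exist in %s\n", profile, kubeconfig)
			os.Exit(1)
		}

		cluster, ok := cfg.Clusters[ctx.Cluster]
		if !ok {
			fmt.Printf("cluster %q referenced by context %q not found\n", ctx.Cluster, profile)
			os.Exit(1)
		}

		u, err := url.Parse(cluster.Server)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse server URL:", err)
			os.Exit(1)
		}

		// A refused dial here corresponds to the
		// "connection to the server localhost:8443 was refused" failure.
		conn, err := net.DialTimeout("tcp", u.Host, 5*time.Second)
		if err != nil {
			fmt.Printf("apiserver %s is not reachable: %v\n", u.Host, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("context %q OK, apiserver %s reachable\n", profile, u.Host)
	}

The sketch only reads the kubeconfig and reports reachability of the configured endpoint; it makes no changes to the cluster or the profile.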

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (508.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1025 22:53:44.873910  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:06.912367  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:19.036828  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:30.022819  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:31.577114  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:34.615258  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:40.902155  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:54:42.624153  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:06.795332  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:10.325697  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:20.932648  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:40.959108  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:47.013827  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:55:48.635319  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.955538196s)

                                                
                                                
-- stdout --
	* [old-k8s-version-005932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-005932" primary control-plane node in "old-k8s-version-005932" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-005932" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:53:39.644096  726389 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:53:39.644229  726389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:53:39.644239  726389 out.go:358] Setting ErrFile to fd 2...
	I1025 22:53:39.644245  726389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:53:39.644427  726389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:53:39.645029  726389 out.go:352] Setting JSON to false
	I1025 22:53:39.646127  726389 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20164,"bootTime":1729876656,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:53:39.646248  726389 start.go:139] virtualization: kvm guest
	I1025 22:53:39.649053  726389 out.go:177] * [old-k8s-version-005932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:53:39.650631  726389 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:53:39.650630  726389 notify.go:220] Checking for updates...
	I1025 22:53:39.651872  726389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:53:39.653078  726389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:53:39.654218  726389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:53:39.655267  726389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:53:39.656318  726389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:53:39.657882  726389 config.go:182] Loaded profile config "old-k8s-version-005932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1025 22:53:39.658272  726389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:53:39.658327  726389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:53:39.674812  726389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44267
	I1025 22:53:39.675264  726389 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:53:39.675855  726389 main.go:141] libmachine: Using API Version  1
	I1025 22:53:39.675878  726389 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:53:39.676250  726389 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:53:39.676437  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:53:39.678168  726389 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 22:53:39.679399  726389 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:53:39.679685  726389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:53:39.679721  726389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:53:39.694726  726389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I1025 22:53:39.695193  726389 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:53:39.695656  726389 main.go:141] libmachine: Using API Version  1
	I1025 22:53:39.695680  726389 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:53:39.695981  726389 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:53:39.696128  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:53:39.732702  726389 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:53:39.733954  726389 start.go:297] selected driver: kvm2
	I1025 22:53:39.733969  726389 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:53:39.734115  726389 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:53:39.735053  726389 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:53:39.735154  726389 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:53:39.750128  726389 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:53:39.750593  726389 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:53:39.750628  726389 cni.go:84] Creating CNI manager for ""
	I1025 22:53:39.750683  726389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:53:39.750765  726389 start.go:340] cluster config:
	{Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:53:39.750866  726389 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:53:39.752573  726389 out.go:177] * Starting "old-k8s-version-005932" primary control-plane node in "old-k8s-version-005932" cluster
	I1025 22:53:39.753776  726389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:53:39.753814  726389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1025 22:53:39.753825  726389 cache.go:56] Caching tarball of preloaded images
	I1025 22:53:39.753896  726389 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:53:39.753905  726389 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1025 22:53:39.753993  726389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/config.json ...
	I1025 22:53:39.754174  726389 start.go:360] acquireMachinesLock for old-k8s-version-005932: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:53:39.754213  726389 start.go:364] duration metric: took 21.965µs to acquireMachinesLock for "old-k8s-version-005932"
	I1025 22:53:39.754257  726389 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:53:39.754268  726389 fix.go:54] fixHost starting: 
	I1025 22:53:39.754520  726389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:53:39.754564  726389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:53:39.768936  726389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I1025 22:53:39.769414  726389 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:53:39.769938  726389 main.go:141] libmachine: Using API Version  1
	I1025 22:53:39.769962  726389 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:53:39.770255  726389 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:53:39.770431  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:53:39.770564  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetState
	I1025 22:53:39.771928  726389 fix.go:112] recreateIfNeeded on old-k8s-version-005932: state=Stopped err=<nil>
	I1025 22:53:39.771953  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	W1025 22:53:39.772093  726389 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 22:53:39.774449  726389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-005932" ...
	I1025 22:53:39.775678  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .Start
	I1025 22:53:39.775852  726389 main.go:141] libmachine: (old-k8s-version-005932) starting domain...
	I1025 22:53:39.775876  726389 main.go:141] libmachine: (old-k8s-version-005932) ensuring networks are active...
	I1025 22:53:39.776524  726389 main.go:141] libmachine: (old-k8s-version-005932) Ensuring network default is active
	I1025 22:53:39.776924  726389 main.go:141] libmachine: (old-k8s-version-005932) Ensuring network mk-old-k8s-version-005932 is active
	I1025 22:53:39.777499  726389 main.go:141] libmachine: (old-k8s-version-005932) getting domain XML...
	I1025 22:53:39.778171  726389 main.go:141] libmachine: (old-k8s-version-005932) creating domain...
	I1025 22:53:41.021383  726389 main.go:141] libmachine: (old-k8s-version-005932) waiting for IP...
	I1025 22:53:41.022339  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:41.022768  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:41.022840  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:41.022765  726424 retry.go:31] will retry after 196.118604ms: waiting for domain to come up
	I1025 22:53:41.220149  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:41.220682  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:41.220701  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:41.220651  726424 retry.go:31] will retry after 290.738468ms: waiting for domain to come up
	I1025 22:53:41.513411  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:41.514073  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:41.514106  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:41.514052  726424 retry.go:31] will retry after 407.920853ms: waiting for domain to come up
	I1025 22:53:41.923654  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:41.924255  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:41.924286  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:41.924206  726424 retry.go:31] will retry after 520.336864ms: waiting for domain to come up
	I1025 22:53:42.445705  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:42.446298  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:42.446323  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:42.446264  726424 retry.go:31] will retry after 569.658596ms: waiting for domain to come up
	I1025 22:53:43.017958  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:43.018311  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:43.018347  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:43.018292  726424 retry.go:31] will retry after 946.981411ms: waiting for domain to come up
	I1025 22:53:43.967330  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:43.967776  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:43.967803  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:43.967751  726424 retry.go:31] will retry after 932.478663ms: waiting for domain to come up
	I1025 22:53:44.901865  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:44.902406  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:44.902463  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:44.902387  726424 retry.go:31] will retry after 1.477175621s: waiting for domain to come up
	I1025 22:53:46.382058  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:46.382635  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:46.382672  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:46.382610  726424 retry.go:31] will retry after 1.846238527s: waiting for domain to come up
	I1025 22:53:48.231520  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:48.231976  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:48.232003  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:48.231944  726424 retry.go:31] will retry after 1.638262984s: waiting for domain to come up
	I1025 22:53:49.872260  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:49.872780  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:49.872806  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:49.872746  726424 retry.go:31] will retry after 2.506171177s: waiting for domain to come up
	I1025 22:53:52.382106  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:52.382583  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:52.382630  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:52.382569  726424 retry.go:31] will retry after 2.796262921s: waiting for domain to come up
	I1025 22:53:55.180675  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:55.181241  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | unable to find current IP address of domain old-k8s-version-005932 in network mk-old-k8s-version-005932
	I1025 22:53:55.181270  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | I1025 22:53:55.181208  726424 retry.go:31] will retry after 4.037426409s: waiting for domain to come up
	I1025 22:53:59.222010  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.222556  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has current primary IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.222582  726389 main.go:141] libmachine: (old-k8s-version-005932) found domain IP: 192.168.39.215
	I1025 22:53:59.222597  726389 main.go:141] libmachine: (old-k8s-version-005932) reserving static IP address...
	I1025 22:53:59.222961  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "old-k8s-version-005932", mac: "52:54:00:fd:66:94", ip: "192.168.39.215"} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.222991  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | skip adding static IP to network mk-old-k8s-version-005932 - found existing host DHCP lease matching {name: "old-k8s-version-005932", mac: "52:54:00:fd:66:94", ip: "192.168.39.215"}
	I1025 22:53:59.223004  726389 main.go:141] libmachine: (old-k8s-version-005932) reserved static IP address 192.168.39.215 for domain old-k8s-version-005932
	I1025 22:53:59.223018  726389 main.go:141] libmachine: (old-k8s-version-005932) waiting for SSH...
	I1025 22:53:59.223032  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | Getting to WaitForSSH function...
	I1025 22:53:59.225048  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.225370  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.225420  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.225441  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | Using SSH client type: external
	I1025 22:53:59.225476  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa (-rw-------)
	I1025 22:53:59.225517  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:53:59.225530  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | About to run SSH command:
	I1025 22:53:59.225553  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | exit 0
	I1025 22:53:59.349111  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | SSH cmd err, output: <nil>: 
	I1025 22:53:59.349501  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetConfigRaw
	I1025 22:53:59.350175  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:53:59.352803  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.353249  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.353278  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.353517  726389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/config.json ...
	I1025 22:53:59.353701  726389 machine.go:93] provisionDockerMachine start ...
	I1025 22:53:59.353720  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:53:59.353943  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.356238  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.356608  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.356652  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.356762  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:53:59.356923  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.357100  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.357251  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:53:59.357404  726389 main.go:141] libmachine: Using SSH client type: native
	I1025 22:53:59.357595  726389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:53:59.357604  726389 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 22:53:59.457392  726389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 22:53:59.457424  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:53:59.457695  726389 buildroot.go:166] provisioning hostname "old-k8s-version-005932"
	I1025 22:53:59.457724  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:53:59.457965  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.461062  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.461473  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.461508  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.461647  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:53:59.461841  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.462041  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.462211  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:53:59.462380  726389 main.go:141] libmachine: Using SSH client type: native
	I1025 22:53:59.462657  726389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:53:59.462679  726389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-005932 && echo "old-k8s-version-005932" | sudo tee /etc/hostname
	I1025 22:53:59.584362  726389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-005932
	
	I1025 22:53:59.584396  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.587100  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.587492  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.587526  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.587658  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:53:59.587861  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.588011  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.588143  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:53:59.588294  726389 main.go:141] libmachine: Using SSH client type: native
	I1025 22:53:59.588501  726389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:53:59.588528  726389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-005932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-005932/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-005932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:53:59.694594  726389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:53:59.694633  726389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:53:59.694669  726389 buildroot.go:174] setting up certificates
	I1025 22:53:59.694680  726389 provision.go:84] configureAuth start
	I1025 22:53:59.694689  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetMachineName
	I1025 22:53:59.694949  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:53:59.697638  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.697989  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.698019  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.698160  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.700367  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.700677  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.700718  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.700830  726389 provision.go:143] copyHostCerts
	I1025 22:53:59.700909  726389 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:53:59.700950  726389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:53:59.701047  726389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:53:59.701187  726389 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:53:59.701200  726389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:53:59.701243  726389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:53:59.701338  726389 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:53:59.701348  726389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:53:59.701385  726389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:53:59.701476  726389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-005932 san=[127.0.0.1 192.168.39.215 localhost minikube old-k8s-version-005932]
	I1025 22:53:59.786646  726389 provision.go:177] copyRemoteCerts
	I1025 22:53:59.786720  726389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:53:59.786754  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.789473  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.789764  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.789792  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.789976  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:53:59.790177  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.790301  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:53:59.790464  726389 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:53:59.871525  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:53:59.897762  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 22:53:59.925554  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:53:59.949620  726389 provision.go:87] duration metric: took 254.923135ms to configureAuth
	I1025 22:53:59.949655  726389 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:53:59.949903  726389 config.go:182] Loaded profile config "old-k8s-version-005932": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1025 22:53:59.950003  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:53:59.952882  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.953253  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:53:59.953286  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:53:59.953442  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:53:59.953648  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.953825  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:53:59.953949  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:53:59.954065  726389 main.go:141] libmachine: Using SSH client type: native
	I1025 22:53:59.954291  726389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:53:59.954312  726389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:54:00.182234  726389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:54:00.182262  726389 machine.go:96] duration metric: took 828.548177ms to provisionDockerMachine
	I1025 22:54:00.182277  726389 start.go:293] postStartSetup for "old-k8s-version-005932" (driver="kvm2")
	I1025 22:54:00.182291  726389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:54:00.182316  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:54:00.182680  726389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:54:00.182719  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:54:00.185522  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.185917  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:00.185942  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.186063  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:54:00.186282  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:54:00.186449  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:54:00.186601  726389 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:54:00.268101  726389 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:54:00.272636  726389 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:54:00.272665  726389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:54:00.272719  726389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:54:00.272798  726389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:54:00.272921  726389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:54:00.282522  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:54:00.306757  726389 start.go:296] duration metric: took 124.461523ms for postStartSetup
	I1025 22:54:00.306797  726389 fix.go:56] duration metric: took 20.552529632s for fixHost
	I1025 22:54:00.306819  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:54:00.309411  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.309819  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:00.309852  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.310063  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:54:00.310266  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:54:00.310461  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:54:00.310593  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:54:00.310773  726389 main.go:141] libmachine: Using SSH client type: native
	I1025 22:54:00.310983  726389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1025 22:54:00.310995  726389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:54:00.417908  726389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729896840.375992857
	
	I1025 22:54:00.417934  726389 fix.go:216] guest clock: 1729896840.375992857
	I1025 22:54:00.417942  726389 fix.go:229] Guest: 2024-10-25 22:54:00.375992857 +0000 UTC Remote: 2024-10-25 22:54:00.306801813 +0000 UTC m=+20.702643676 (delta=69.191044ms)
	I1025 22:54:00.417963  726389 fix.go:200] guest clock delta is within tolerance: 69.191044ms
	I1025 22:54:00.417968  726389 start.go:83] releasing machines lock for "old-k8s-version-005932", held for 20.663746286s
	I1025 22:54:00.417986  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:54:00.418217  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:54:00.420723  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.421102  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:00.421150  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.421329  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:54:00.421794  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:54:00.421959  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .DriverName
	I1025 22:54:00.422082  726389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:54:00.422178  726389 ssh_runner.go:195] Run: cat /version.json
	I1025 22:54:00.422193  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:54:00.422209  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHHostname
	I1025 22:54:00.424544  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.424879  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:00.424908  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.424933  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.425024  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:54:00.425217  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:54:00.425314  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:00.425337  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:00.425378  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:54:00.425533  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHPort
	I1025 22:54:00.425552  726389 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:54:00.425648  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHKeyPath
	I1025 22:54:00.425774  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetSSHUsername
	I1025 22:54:00.425907  726389 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/old-k8s-version-005932/id_rsa Username:docker}
	I1025 22:54:00.527701  726389 ssh_runner.go:195] Run: systemctl --version
	I1025 22:54:00.535164  726389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:54:00.680968  726389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:54:00.687854  726389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:54:00.687937  726389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:54:00.704184  726389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:54:00.704212  726389 start.go:495] detecting cgroup driver to use...
	I1025 22:54:00.704284  726389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:54:00.719691  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:54:00.734098  726389 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:54:00.734169  726389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:54:00.748157  726389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:54:00.761504  726389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:54:00.879510  726389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:54:01.029020  726389 docker.go:233] disabling docker service ...
	I1025 22:54:01.029086  726389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:54:01.053463  726389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:54:01.067393  726389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:54:01.211548  726389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:54:01.355090  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:54:01.369791  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:54:01.391506  726389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1025 22:54:01.391585  726389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:54:01.402515  726389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:54:01.402580  726389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:54:01.412727  726389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:54:01.422690  726389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:54:01.434313  726389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:54:01.445129  726389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:54:01.454301  726389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:54:01.454363  726389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:54:01.468144  726389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:54:01.478219  726389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:54:01.611728  726389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 22:54:01.702714  726389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:54:01.702800  726389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:54:01.708280  726389 start.go:563] Will wait 60s for crictl version
	I1025 22:54:01.708349  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:01.712166  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:54:01.754374  726389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:54:01.754469  726389 ssh_runner.go:195] Run: crio --version
	I1025 22:54:01.783254  726389 ssh_runner.go:195] Run: crio --version
	I1025 22:54:01.812662  726389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1025 22:54:01.813854  726389 main.go:141] libmachine: (old-k8s-version-005932) Calling .GetIP
	I1025 22:54:01.816497  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:01.816814  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:66:94", ip: ""} in network mk-old-k8s-version-005932: {Iface:virbr3 ExpiryTime:2024-10-25 23:53:51 +0000 UTC Type:0 Mac:52:54:00:fd:66:94 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:old-k8s-version-005932 Clientid:01:52:54:00:fd:66:94}
	I1025 22:54:01.816854  726389 main.go:141] libmachine: (old-k8s-version-005932) DBG | domain old-k8s-version-005932 has defined IP address 192.168.39.215 and MAC address 52:54:00:fd:66:94 in network mk-old-k8s-version-005932
	I1025 22:54:01.817122  726389 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1025 22:54:01.821198  726389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:54:01.833662  726389 kubeadm.go:883] updating cluster {Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:54:01.833819  726389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 22:54:01.833882  726389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:54:01.877339  726389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:54:01.877413  726389 ssh_runner.go:195] Run: which lz4
	I1025 22:54:01.881544  726389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:54:01.885715  726389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:54:01.885754  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1025 22:54:03.481967  726389 crio.go:462] duration metric: took 1.600452798s to copy over tarball
	I1025 22:54:03.482045  726389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:54:06.364473  726389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.882394084s)
	I1025 22:54:06.364507  726389 crio.go:469] duration metric: took 2.882507816s to extract the tarball
	I1025 22:54:06.364517  726389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:54:06.409395  726389 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:54:06.443547  726389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1025 22:54:06.443577  726389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 22:54:06.443690  726389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:54:06.443696  726389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:06.443771  726389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.443704  726389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.443758  726389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.443773  726389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.443740  726389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1025 22:54:06.443845  726389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:06.445677  726389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1025 22:54:06.445688  726389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:06.445693  726389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.445748  726389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:54:06.445756  726389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:06.445672  726389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.445777  726389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.445774  726389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.624701  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.637293  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.649530  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.660846  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.661914  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1025 22:54:06.702782  726389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1025 22:54:06.702860  726389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.702921  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.713875  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:06.725207  726389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1025 22:54:06.725261  726389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.725314  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.730884  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:06.778171  726389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1025 22:54:06.778230  726389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.778269  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.808653  726389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1025 22:54:06.808701  726389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1025 22:54:06.808709  726389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1025 22:54:06.808743  726389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.808747  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.808774  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.808781  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.853243  726389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1025 22:54:06.853286  726389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:06.853349  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.853348  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.853447  726389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1025 22:54:06.853525  726389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:06.853539  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:54:06.853556  726389 ssh_runner.go:195] Run: which crictl
	I1025 22:54:06.853500  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.853644  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.895710  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:06.895955  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:06.992223  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:06.992258  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:06.992316  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:54:06.992329  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:07.002944  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:54:07.026516  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:07.026612  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1025 22:54:07.171198  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1025 22:54:07.175848  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1025 22:54:07.175915  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:07.175898  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1025 22:54:07.175977  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1025 22:54:07.176065  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1025 22:54:07.176102  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1025 22:54:07.302132  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1025 22:54:07.307940  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1025 22:54:07.308038  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1025 22:54:07.308121  726389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1025 22:54:07.308173  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1025 22:54:07.313527  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1025 22:54:07.343262  726389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1025 22:54:07.608564  726389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:54:07.749273  726389 cache_images.go:92] duration metric: took 1.305672426s to LoadCachedImages
	W1025 22:54:07.749368  726389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19758-661979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1025 22:54:07.749393  726389 kubeadm.go:934] updating node { 192.168.39.215 8443 v1.20.0 crio true true} ...
	I1025 22:54:07.749506  726389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-005932 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:54:07.749574  726389 ssh_runner.go:195] Run: crio config
	I1025 22:54:07.800802  726389 cni.go:84] Creating CNI manager for ""
	I1025 22:54:07.800830  726389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:54:07.800845  726389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 22:54:07.800874  726389 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-005932 NodeName:old-k8s-version-005932 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 22:54:07.801124  726389 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-005932"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:54:07.801208  726389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1025 22:54:07.812620  726389 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:54:07.812695  726389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:54:07.823005  726389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1025 22:54:07.842915  726389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:54:07.862375  726389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1025 22:54:07.882459  726389 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1025 22:54:07.886857  726389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:54:07.900741  726389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:54:08.030287  726389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:54:08.049334  726389 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932 for IP: 192.168.39.215
	I1025 22:54:08.049365  726389 certs.go:194] generating shared ca certs ...
	I1025 22:54:08.049397  726389 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:54:08.049600  726389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:54:08.049658  726389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:54:08.049673  726389 certs.go:256] generating profile certs ...
	I1025 22:54:08.049829  726389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/client.key
	I1025 22:54:08.049900  726389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key.fb60c9ca
	I1025 22:54:08.049959  726389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key
	I1025 22:54:08.050113  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:54:08.050156  726389 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:54:08.050172  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:54:08.050204  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:54:08.050236  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:54:08.050266  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:54:08.050322  726389 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:54:08.051013  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:54:08.090890  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:54:08.129352  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:54:08.160009  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:54:08.192758  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 22:54:08.235958  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 22:54:08.273587  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:54:08.320798  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/old-k8s-version-005932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 22:54:08.347498  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:54:08.372099  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:54:08.397766  726389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:54:08.423731  726389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:54:08.440554  726389 ssh_runner.go:195] Run: openssl version
	I1025 22:54:08.446968  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:54:08.459627  726389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:54:08.464597  726389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:54:08.464657  726389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:54:08.471044  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:54:08.482184  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:54:08.493129  726389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:54:08.497580  726389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:54:08.497644  726389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:54:08.503447  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:54:08.515314  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:54:08.527120  726389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:54:08.531979  726389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:54:08.532037  726389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:54:08.538292  726389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:54:08.551101  726389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:54:08.556004  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:54:08.562402  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:54:08.568525  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:54:08.575235  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:54:08.581435  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:54:08.587724  726389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 22:54:08.594162  726389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-005932 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-005932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:54:08.594287  726389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:54:08.594346  726389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:54:08.633106  726389 cri.go:89] found id: ""
	I1025 22:54:08.633178  726389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:54:08.644203  726389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 22:54:08.644225  726389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 22:54:08.644270  726389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:54:08.654690  726389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:54:08.655892  726389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-005932" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:54:08.656473  726389 kubeconfig.go:62] /home/jenkins/minikube-integration/19758-661979/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-005932" cluster setting kubeconfig missing "old-k8s-version-005932" context setting]
	I1025 22:54:08.657379  726389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:54:08.716477  726389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:54:08.727798  726389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.215
	I1025 22:54:08.727835  726389 kubeadm.go:1160] stopping kube-system containers ...
	I1025 22:54:08.727849  726389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 22:54:08.727907  726389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:54:08.768897  726389 cri.go:89] found id: ""
	I1025 22:54:08.769025  726389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 22:54:08.787233  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:54:08.798311  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:54:08.798338  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 22:54:08.798400  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:54:08.808364  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:54:08.808424  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:54:08.819135  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:54:08.829665  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:54:08.829740  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:54:08.841060  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:54:08.852718  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:54:08.852785  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:54:08.863433  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:54:08.874485  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:54:08.874576  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:54:08.885423  726389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:54:08.897774  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:54:09.032552  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:54:09.698075  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:54:09.929810  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:54:10.051138  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:54:10.139616  726389 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:54:10.139732  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:10.640511  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:11.139838  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:11.639803  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:12.139767  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:12.640025  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:13.140436  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:13.640585  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:14.139998  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:14.640544  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:15.139884  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:15.640708  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:16.140207  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:16.639860  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:17.140695  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:17.640411  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:18.140103  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:18.640033  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:19.140346  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:19.640598  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:20.140272  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:20.639822  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:21.140537  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:21.640174  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:22.140052  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:22.639979  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:23.140267  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:23.640026  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:24.140134  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:24.640792  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:25.140840  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:25.640870  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:26.140050  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:26.640412  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:27.139966  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:27.640507  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:28.140320  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:28.640327  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:29.140416  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:29.640221  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:30.140005  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:30.640838  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:31.140036  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:31.640450  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:32.140014  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:32.639913  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:33.140639  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:33.640465  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:34.140492  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:34.640187  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:35.140804  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:35.640617  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:36.139950  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:36.640431  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:37.140229  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:37.640741  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:38.140305  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:38.639828  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:39.140741  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:39.640687  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:40.140442  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:40.640622  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:41.140299  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:41.640435  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:42.140485  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:42.640658  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:43.140578  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:43.640584  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:44.139976  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:44.640776  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:45.140685  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:45.640179  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:46.139814  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:46.640192  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:47.140124  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:47.639852  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:48.139951  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:48.639928  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:49.140063  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:49.640588  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:50.139904  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:50.640790  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:51.140431  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:51.640506  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:52.140593  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:52.640266  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:53.140205  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:53.640654  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:54.140439  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:54.640620  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:55.140548  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:55.639883  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:56.140369  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:56.640634  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:57.139880  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:57.640535  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:58.140407  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:58.640683  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:59.139958  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:54:59.640359  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:00.140822  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:00.640499  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:01.140826  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:01.640612  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:02.140793  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:02.640688  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:03.140639  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:03.640182  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:04.140619  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:04.640058  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:05.139833  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:05.639934  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:06.140418  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:06.639880  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:07.140815  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:07.640613  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:08.140439  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:08.640517  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:09.140356  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:09.639977  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:10.140677  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:10.140781  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:10.181289  726389 cri.go:89] found id: ""
	I1025 22:55:10.181323  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.181334  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:10.181342  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:10.181430  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:10.222270  726389 cri.go:89] found id: ""
	I1025 22:55:10.222299  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.222307  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:10.222313  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:10.222386  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:10.256375  726389 cri.go:89] found id: ""
	I1025 22:55:10.256415  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.256427  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:10.256441  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:10.256504  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:10.304194  726389 cri.go:89] found id: ""
	I1025 22:55:10.304243  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.304255  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:10.304264  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:10.304332  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:10.337970  726389 cri.go:89] found id: ""
	I1025 22:55:10.337999  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.338007  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:10.338014  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:10.338065  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:10.376386  726389 cri.go:89] found id: ""
	I1025 22:55:10.376424  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.376435  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:10.376443  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:10.376506  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:10.413592  726389 cri.go:89] found id: ""
	I1025 22:55:10.413623  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.413634  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:10.413643  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:10.413717  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:10.447252  726389 cri.go:89] found id: ""
	I1025 22:55:10.447301  726389 logs.go:282] 0 containers: []
	W1025 22:55:10.447310  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:10.447321  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:10.447334  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:10.498628  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:10.498676  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:10.515998  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:10.516034  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:10.649345  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:10.649377  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:10.649395  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:10.725303  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:10.725356  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:13.270149  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:13.285186  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:13.285255  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:13.324925  726389 cri.go:89] found id: ""
	I1025 22:55:13.324969  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.324982  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:13.324990  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:13.325054  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:13.367341  726389 cri.go:89] found id: ""
	I1025 22:55:13.367374  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.367382  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:13.367391  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:13.367464  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:13.406789  726389 cri.go:89] found id: ""
	I1025 22:55:13.406814  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.406822  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:13.406827  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:13.406888  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:13.440976  726389 cri.go:89] found id: ""
	I1025 22:55:13.441007  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.441016  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:13.441022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:13.441088  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:13.475570  726389 cri.go:89] found id: ""
	I1025 22:55:13.475601  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.475612  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:13.475620  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:13.475685  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:13.536742  726389 cri.go:89] found id: ""
	I1025 22:55:13.536772  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.536784  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:13.536792  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:13.536858  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:13.587887  726389 cri.go:89] found id: ""
	I1025 22:55:13.587919  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.587928  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:13.587934  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:13.587988  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:13.626462  726389 cri.go:89] found id: ""
	I1025 22:55:13.626496  726389 logs.go:282] 0 containers: []
	W1025 22:55:13.626504  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:13.626515  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:13.626530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:13.676855  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:13.676916  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:13.692181  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:13.692213  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:13.774053  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:13.774081  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:13.774096  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:13.850812  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:13.850855  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:16.392139  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:16.408460  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:16.408527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:16.442256  726389 cri.go:89] found id: ""
	I1025 22:55:16.442298  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.442311  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:16.442319  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:16.442385  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:16.480594  726389 cri.go:89] found id: ""
	I1025 22:55:16.480629  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.480640  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:16.480649  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:16.480718  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:16.519319  726389 cri.go:89] found id: ""
	I1025 22:55:16.519349  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.519359  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:16.519367  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:16.519428  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:16.559284  726389 cri.go:89] found id: ""
	I1025 22:55:16.559310  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.559319  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:16.559325  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:16.559389  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:16.596351  726389 cri.go:89] found id: ""
	I1025 22:55:16.596379  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.596387  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:16.596394  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:16.596459  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:16.629999  726389 cri.go:89] found id: ""
	I1025 22:55:16.630028  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.630038  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:16.630047  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:16.630111  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:16.668423  726389 cri.go:89] found id: ""
	I1025 22:55:16.668451  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.668459  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:16.668467  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:16.668527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:16.708710  726389 cri.go:89] found id: ""
	I1025 22:55:16.708745  726389 logs.go:282] 0 containers: []
	W1025 22:55:16.708756  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:16.708770  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:16.708786  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:16.789043  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:16.789085  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:16.830109  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:16.830139  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:16.878917  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:16.878956  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:16.893003  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:16.893034  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:16.961896  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:19.462104  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:19.475892  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:19.475968  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:19.511455  726389 cri.go:89] found id: ""
	I1025 22:55:19.511489  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.511501  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:19.511512  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:19.511587  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:19.548048  726389 cri.go:89] found id: ""
	I1025 22:55:19.548078  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.548086  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:19.548092  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:19.548144  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:19.593606  726389 cri.go:89] found id: ""
	I1025 22:55:19.593656  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.593676  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:19.593684  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:19.593753  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:19.629647  726389 cri.go:89] found id: ""
	I1025 22:55:19.629679  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.629688  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:19.629695  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:19.629754  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:19.665833  726389 cri.go:89] found id: ""
	I1025 22:55:19.665865  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.665876  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:19.665883  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:19.665946  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:19.702032  726389 cri.go:89] found id: ""
	I1025 22:55:19.702064  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.702073  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:19.702080  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:19.702138  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:19.742139  726389 cri.go:89] found id: ""
	I1025 22:55:19.742175  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.742187  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:19.742197  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:19.742274  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:19.777112  726389 cri.go:89] found id: ""
	I1025 22:55:19.777146  726389 logs.go:282] 0 containers: []
	W1025 22:55:19.777155  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:19.777165  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:19.777179  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:19.828879  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:19.828917  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:19.842494  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:19.842523  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:19.928347  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:19.928380  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:19.928398  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:20.018214  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:20.018263  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:22.559596  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:22.574757  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:22.574839  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:22.615051  726389 cri.go:89] found id: ""
	I1025 22:55:22.615078  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.615087  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:22.615093  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:22.615159  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:22.656014  726389 cri.go:89] found id: ""
	I1025 22:55:22.656048  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.656056  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:22.656063  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:22.656121  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:22.695875  726389 cri.go:89] found id: ""
	I1025 22:55:22.695910  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.695922  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:22.695930  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:22.695998  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:22.733321  726389 cri.go:89] found id: ""
	I1025 22:55:22.733358  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.733369  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:22.733378  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:22.733452  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:22.768921  726389 cri.go:89] found id: ""
	I1025 22:55:22.768971  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.768984  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:22.768992  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:22.769053  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:22.811942  726389 cri.go:89] found id: ""
	I1025 22:55:22.811972  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.811984  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:22.811993  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:22.812057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:22.848821  726389 cri.go:89] found id: ""
	I1025 22:55:22.848849  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.848858  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:22.848865  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:22.848920  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:22.886009  726389 cri.go:89] found id: ""
	I1025 22:55:22.886039  726389 logs.go:282] 0 containers: []
	W1025 22:55:22.886051  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:22.886069  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:22.886089  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:22.953202  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:22.953224  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:22.953237  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:23.033605  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:23.033646  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:23.071875  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:23.071907  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:23.123803  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:23.123844  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:25.639584  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:25.653562  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:25.653640  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:25.697247  726389 cri.go:89] found id: ""
	I1025 22:55:25.697282  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.697295  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:25.697304  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:25.697372  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:25.752011  726389 cri.go:89] found id: ""
	I1025 22:55:25.752045  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.752057  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:25.752066  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:25.752134  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:25.808937  726389 cri.go:89] found id: ""
	I1025 22:55:25.808977  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.808989  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:25.808998  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:25.809074  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:25.844133  726389 cri.go:89] found id: ""
	I1025 22:55:25.844171  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.844183  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:25.844191  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:25.844252  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:25.878230  726389 cri.go:89] found id: ""
	I1025 22:55:25.878260  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.878268  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:25.878274  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:25.878331  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:25.914069  726389 cri.go:89] found id: ""
	I1025 22:55:25.914095  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.914103  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:25.914109  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:25.914156  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:25.947928  726389 cri.go:89] found id: ""
	I1025 22:55:25.947969  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.947979  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:25.947986  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:25.948056  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:25.985887  726389 cri.go:89] found id: ""
	I1025 22:55:25.985917  726389 logs.go:282] 0 containers: []
	W1025 22:55:25.985926  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:25.985936  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:25.985953  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:26.062447  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:26.062473  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:26.062494  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:26.140655  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:26.140706  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:26.178861  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:26.178894  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:26.229647  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:26.229686  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:28.744149  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:28.757328  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:28.757401  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:28.794616  726389 cri.go:89] found id: ""
	I1025 22:55:28.794647  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.794659  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:28.794668  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:28.794740  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:28.827048  726389 cri.go:89] found id: ""
	I1025 22:55:28.827081  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.827092  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:28.827100  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:28.827173  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:28.859014  726389 cri.go:89] found id: ""
	I1025 22:55:28.859051  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.859064  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:28.859073  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:28.859148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:28.896459  726389 cri.go:89] found id: ""
	I1025 22:55:28.896488  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.896496  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:28.896501  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:28.896551  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:28.932786  726389 cri.go:89] found id: ""
	I1025 22:55:28.932814  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.932827  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:28.932835  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:28.932898  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:28.967967  726389 cri.go:89] found id: ""
	I1025 22:55:28.968004  726389 logs.go:282] 0 containers: []
	W1025 22:55:28.968016  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:28.968025  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:28.968094  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:29.001479  726389 cri.go:89] found id: ""
	I1025 22:55:29.001511  726389 logs.go:282] 0 containers: []
	W1025 22:55:29.001520  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:29.001531  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:29.001599  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:29.036717  726389 cri.go:89] found id: ""
	I1025 22:55:29.036744  726389 logs.go:282] 0 containers: []
	W1025 22:55:29.036752  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:29.036763  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:29.036778  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:29.114496  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:29.114537  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:29.154536  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:29.154566  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:29.207157  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:29.207200  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:29.220705  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:29.220735  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:29.297462  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:31.798075  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:31.811481  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:31.811545  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:31.844213  726389 cri.go:89] found id: ""
	I1025 22:55:31.844254  726389 logs.go:282] 0 containers: []
	W1025 22:55:31.844265  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:31.844275  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:31.844345  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:31.877884  726389 cri.go:89] found id: ""
	I1025 22:55:31.877913  726389 logs.go:282] 0 containers: []
	W1025 22:55:31.877921  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:31.877928  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:31.877977  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:31.914005  726389 cri.go:89] found id: ""
	I1025 22:55:31.914047  726389 logs.go:282] 0 containers: []
	W1025 22:55:31.914058  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:31.914066  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:31.914139  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:31.949905  726389 cri.go:89] found id: ""
	I1025 22:55:31.949938  726389 logs.go:282] 0 containers: []
	W1025 22:55:31.949947  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:31.949953  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:31.950006  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:31.984870  726389 cri.go:89] found id: ""
	I1025 22:55:31.984906  726389 logs.go:282] 0 containers: []
	W1025 22:55:31.984918  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:31.984927  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:31.985013  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:32.024773  726389 cri.go:89] found id: ""
	I1025 22:55:32.024807  726389 logs.go:282] 0 containers: []
	W1025 22:55:32.024818  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:32.024826  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:32.024890  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:32.059557  726389 cri.go:89] found id: ""
	I1025 22:55:32.059586  726389 logs.go:282] 0 containers: []
	W1025 22:55:32.059595  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:32.059602  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:32.059665  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:32.103489  726389 cri.go:89] found id: ""
	I1025 22:55:32.103528  726389 logs.go:282] 0 containers: []
	W1025 22:55:32.103540  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:32.103552  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:32.103568  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:32.142622  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:32.142676  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:32.199975  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:32.200026  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:32.214018  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:32.214056  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:32.288328  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:32.288373  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:32.288390  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:34.865883  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:34.880340  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:34.880420  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:34.916600  726389 cri.go:89] found id: ""
	I1025 22:55:34.916632  726389 logs.go:282] 0 containers: []
	W1025 22:55:34.916642  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:34.916651  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:34.916717  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:34.954264  726389 cri.go:89] found id: ""
	I1025 22:55:34.954291  726389 logs.go:282] 0 containers: []
	W1025 22:55:34.954299  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:34.954305  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:34.954367  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:34.991065  726389 cri.go:89] found id: ""
	I1025 22:55:34.991096  726389 logs.go:282] 0 containers: []
	W1025 22:55:34.991106  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:34.991114  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:34.991189  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:35.028828  726389 cri.go:89] found id: ""
	I1025 22:55:35.028857  726389 logs.go:282] 0 containers: []
	W1025 22:55:35.028867  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:35.028875  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:35.028939  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:35.066877  726389 cri.go:89] found id: ""
	I1025 22:55:35.066909  726389 logs.go:282] 0 containers: []
	W1025 22:55:35.066920  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:35.066927  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:35.066994  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:35.102151  726389 cri.go:89] found id: ""
	I1025 22:55:35.102192  726389 logs.go:282] 0 containers: []
	W1025 22:55:35.102205  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:35.102216  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:35.102281  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:35.139413  726389 cri.go:89] found id: ""
	I1025 22:55:35.139447  726389 logs.go:282] 0 containers: []
	W1025 22:55:35.139456  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:35.139463  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:35.139524  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:35.176898  726389 cri.go:89] found id: ""
	I1025 22:55:35.176935  726389 logs.go:282] 0 containers: []
	W1025 22:55:35.176945  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:35.176972  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:35.176990  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:35.247000  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:35.247030  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:35.247047  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:35.332090  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:35.332136  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:35.376520  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:35.376549  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:35.433973  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:35.434024  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:37.948404  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:37.962972  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:37.963045  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:38.000326  726389 cri.go:89] found id: ""
	I1025 22:55:38.000366  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.000378  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:38.000388  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:38.000465  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:38.035391  726389 cri.go:89] found id: ""
	I1025 22:55:38.035423  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.035441  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:38.035450  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:38.035523  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:38.070527  726389 cri.go:89] found id: ""
	I1025 22:55:38.070555  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.070563  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:38.070570  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:38.070623  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:38.104168  726389 cri.go:89] found id: ""
	I1025 22:55:38.104199  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.104208  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:38.104214  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:38.104264  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:38.139228  726389 cri.go:89] found id: ""
	I1025 22:55:38.139256  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.139264  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:38.139271  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:38.139322  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:38.179021  726389 cri.go:89] found id: ""
	I1025 22:55:38.179056  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.179067  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:38.179075  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:38.179138  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:38.216568  726389 cri.go:89] found id: ""
	I1025 22:55:38.216596  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.216608  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:38.216621  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:38.216682  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:38.250762  726389 cri.go:89] found id: ""
	I1025 22:55:38.250794  726389 logs.go:282] 0 containers: []
	W1025 22:55:38.250803  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:38.250813  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:38.250825  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:38.300398  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:38.300453  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:38.314464  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:38.314499  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:38.390227  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:38.390254  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:38.390270  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:38.467529  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:38.467571  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:41.011917  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:41.025435  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:41.025499  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:41.063162  726389 cri.go:89] found id: ""
	I1025 22:55:41.063197  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.063210  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:41.063219  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:41.063284  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:41.097689  726389 cri.go:89] found id: ""
	I1025 22:55:41.097726  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.097738  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:41.097747  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:41.097805  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:41.131377  726389 cri.go:89] found id: ""
	I1025 22:55:41.131408  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.131416  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:41.131423  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:41.131475  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:41.165338  726389 cri.go:89] found id: ""
	I1025 22:55:41.165372  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.165385  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:41.165394  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:41.165467  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:41.200064  726389 cri.go:89] found id: ""
	I1025 22:55:41.200105  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.200117  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:41.200125  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:41.200192  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:41.234270  726389 cri.go:89] found id: ""
	I1025 22:55:41.234293  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.234302  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:41.234310  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:41.234376  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:41.277025  726389 cri.go:89] found id: ""
	I1025 22:55:41.277059  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.277071  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:41.277079  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:41.277143  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:41.322881  726389 cri.go:89] found id: ""
	I1025 22:55:41.322913  726389 logs.go:282] 0 containers: []
	W1025 22:55:41.322925  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:41.322938  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:41.322953  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:41.380653  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:41.380712  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:41.397457  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:41.397485  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:41.479554  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:41.479574  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:41.479585  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:41.558747  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:41.558798  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:44.105387  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:44.118543  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:44.118601  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:44.155477  726389 cri.go:89] found id: ""
	I1025 22:55:44.155512  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.155523  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:44.155530  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:44.155595  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:44.191182  726389 cri.go:89] found id: ""
	I1025 22:55:44.191217  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.191229  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:44.191238  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:44.191298  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:44.227485  726389 cri.go:89] found id: ""
	I1025 22:55:44.227513  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.227520  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:44.227526  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:44.227579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:44.262384  726389 cri.go:89] found id: ""
	I1025 22:55:44.262414  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.262425  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:44.262434  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:44.262505  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:44.296588  726389 cri.go:89] found id: ""
	I1025 22:55:44.296622  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.296633  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:44.296642  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:44.296709  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:44.332939  726389 cri.go:89] found id: ""
	I1025 22:55:44.332982  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.332992  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:44.333001  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:44.333070  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:44.371115  726389 cri.go:89] found id: ""
	I1025 22:55:44.371150  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.371169  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:44.371178  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:44.371248  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:44.409328  726389 cri.go:89] found id: ""
	I1025 22:55:44.409359  726389 logs.go:282] 0 containers: []
	W1025 22:55:44.409370  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:44.409382  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:44.409393  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:44.449793  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:44.449826  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:44.500233  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:44.500271  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:44.513832  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:44.513863  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:44.595121  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:44.595152  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:44.595171  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:47.168908  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:47.183159  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:47.183245  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:47.228814  726389 cri.go:89] found id: ""
	I1025 22:55:47.228849  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.228868  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:47.228879  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:47.228950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:47.267120  726389 cri.go:89] found id: ""
	I1025 22:55:47.267149  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.267161  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:47.267169  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:47.267241  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:47.302202  726389 cri.go:89] found id: ""
	I1025 22:55:47.302230  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.302239  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:47.302245  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:47.302305  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:47.335672  726389 cri.go:89] found id: ""
	I1025 22:55:47.335701  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.335714  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:47.335720  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:47.335771  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:47.375733  726389 cri.go:89] found id: ""
	I1025 22:55:47.375772  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.375785  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:47.375793  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:47.375886  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:47.414327  726389 cri.go:89] found id: ""
	I1025 22:55:47.414361  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.414372  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:47.414381  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:47.414457  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:47.452039  726389 cri.go:89] found id: ""
	I1025 22:55:47.452075  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.452088  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:47.452095  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:47.452162  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:47.489416  726389 cri.go:89] found id: ""
	I1025 22:55:47.489455  726389 logs.go:282] 0 containers: []
	W1025 22:55:47.489467  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:47.489480  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:47.489496  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:47.546922  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:47.546962  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:47.560424  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:47.560458  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:47.640530  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:47.640558  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:47.640578  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:47.726417  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:47.726463  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:50.270645  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:50.284535  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:50.284617  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:50.317618  726389 cri.go:89] found id: ""
	I1025 22:55:50.317652  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.317664  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:50.317673  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:50.317734  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:50.354254  726389 cri.go:89] found id: ""
	I1025 22:55:50.354282  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.354291  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:50.354297  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:50.354353  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:50.392286  726389 cri.go:89] found id: ""
	I1025 22:55:50.392316  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.392324  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:50.392331  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:50.392396  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:50.428633  726389 cri.go:89] found id: ""
	I1025 22:55:50.428668  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.428680  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:50.428694  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:50.428762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:50.465065  726389 cri.go:89] found id: ""
	I1025 22:55:50.465093  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.465102  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:50.465108  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:50.465168  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:50.509163  726389 cri.go:89] found id: ""
	I1025 22:55:50.509199  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.509211  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:50.509220  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:50.509294  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:50.545797  726389 cri.go:89] found id: ""
	I1025 22:55:50.545827  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.545838  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:50.545846  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:50.545915  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:50.583113  726389 cri.go:89] found id: ""
	I1025 22:55:50.583142  726389 logs.go:282] 0 containers: []
	W1025 22:55:50.583153  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:50.583165  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:50.583181  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:50.634504  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:50.634545  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:50.649072  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:50.649109  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:50.726036  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:50.726065  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:50.726083  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:50.802445  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:50.802490  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:53.341083  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:53.355200  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:53.355292  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:53.389170  726389 cri.go:89] found id: ""
	I1025 22:55:53.389201  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.389213  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:53.389221  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:53.389283  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:53.425006  726389 cri.go:89] found id: ""
	I1025 22:55:53.425040  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.425048  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:53.425055  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:53.425106  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:53.460834  726389 cri.go:89] found id: ""
	I1025 22:55:53.460870  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.460880  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:53.460889  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:53.460984  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:53.496146  726389 cri.go:89] found id: ""
	I1025 22:55:53.496178  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.496191  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:53.496199  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:53.496267  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:53.532490  726389 cri.go:89] found id: ""
	I1025 22:55:53.532519  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.532530  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:53.532537  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:53.532606  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:53.568112  726389 cri.go:89] found id: ""
	I1025 22:55:53.568147  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.568158  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:53.568167  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:53.568234  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:53.602684  726389 cri.go:89] found id: ""
	I1025 22:55:53.602717  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.602729  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:53.602738  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:53.602807  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:53.636435  726389 cri.go:89] found id: ""
	I1025 22:55:53.636468  726389 logs.go:282] 0 containers: []
	W1025 22:55:53.636477  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:53.636488  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:53.636500  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:53.689596  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:53.689644  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:53.703157  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:53.703190  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:53.768804  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:53.768837  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:53.768853  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:53.844857  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:53.844914  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:56.388092  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:56.403395  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:56.403483  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:56.442945  726389 cri.go:89] found id: ""
	I1025 22:55:56.442977  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.442985  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:56.442992  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:56.443043  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:56.490412  726389 cri.go:89] found id: ""
	I1025 22:55:56.490446  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.490458  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:56.490466  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:56.490531  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:56.526550  726389 cri.go:89] found id: ""
	I1025 22:55:56.526584  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.526595  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:56.526602  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:56.526653  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:56.562247  726389 cri.go:89] found id: ""
	I1025 22:55:56.562277  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.562285  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:56.562291  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:56.562348  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:56.597075  726389 cri.go:89] found id: ""
	I1025 22:55:56.597104  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.597116  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:56.597124  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:56.597189  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:56.631734  726389 cri.go:89] found id: ""
	I1025 22:55:56.631765  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.631774  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:56.631780  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:56.631832  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:56.664980  726389 cri.go:89] found id: ""
	I1025 22:55:56.665018  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.665029  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:56.665038  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:56.665103  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:56.702354  726389 cri.go:89] found id: ""
	I1025 22:55:56.702387  726389 logs.go:282] 0 containers: []
	W1025 22:55:56.702399  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:56.702412  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:56.702429  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:56.774036  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:56.774060  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:56.774073  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:55:56.850211  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:55:56.850251  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:55:56.893559  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:56.893596  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:56.952238  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:56.952275  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:59.466158  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:55:59.479757  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:55:59.479824  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:55:59.519660  726389 cri.go:89] found id: ""
	I1025 22:55:59.519694  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.519705  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:55:59.519713  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:55:59.519774  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:55:59.564040  726389 cri.go:89] found id: ""
	I1025 22:55:59.564069  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.564077  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:55:59.564083  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:55:59.564134  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:55:59.600579  726389 cri.go:89] found id: ""
	I1025 22:55:59.600612  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.600621  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:55:59.600628  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:55:59.600686  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:55:59.640437  726389 cri.go:89] found id: ""
	I1025 22:55:59.640466  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.640474  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:55:59.640480  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:55:59.640530  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:55:59.683462  726389 cri.go:89] found id: ""
	I1025 22:55:59.683489  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.683497  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:55:59.683503  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:55:59.683563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:55:59.718823  726389 cri.go:89] found id: ""
	I1025 22:55:59.718850  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.718859  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:55:59.718867  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:55:59.718923  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:55:59.771739  726389 cri.go:89] found id: ""
	I1025 22:55:59.771767  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.771776  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:55:59.771782  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:55:59.771834  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:55:59.806640  726389 cri.go:89] found id: ""
	I1025 22:55:59.806670  726389 logs.go:282] 0 containers: []
	W1025 22:55:59.806682  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:55:59.806694  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:55:59.806709  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:55:59.854218  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:55:59.854269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:55:59.868980  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:55:59.869014  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:55:59.939860  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:55:59.939890  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:55:59.939906  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:00.013863  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:00.013898  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:02.555893  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:02.573541  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:02.573623  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:02.618411  726389 cri.go:89] found id: ""
	I1025 22:56:02.618446  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.618457  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:02.618466  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:02.618533  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:02.662029  726389 cri.go:89] found id: ""
	I1025 22:56:02.662056  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.662066  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:02.662074  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:02.662135  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:02.701121  726389 cri.go:89] found id: ""
	I1025 22:56:02.701161  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.701174  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:02.701183  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:02.701249  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:02.738059  726389 cri.go:89] found id: ""
	I1025 22:56:02.738093  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.738108  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:02.738117  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:02.738181  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:02.781729  726389 cri.go:89] found id: ""
	I1025 22:56:02.781762  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.781773  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:02.781781  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:02.781850  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:02.817031  726389 cri.go:89] found id: ""
	I1025 22:56:02.817063  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.817074  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:02.817084  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:02.817140  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:02.860226  726389 cri.go:89] found id: ""
	I1025 22:56:02.860256  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.860264  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:02.860270  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:02.860330  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:02.898198  726389 cri.go:89] found id: ""
	I1025 22:56:02.898233  726389 logs.go:282] 0 containers: []
	W1025 22:56:02.898245  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:02.898257  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:02.898273  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:02.913684  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:02.913720  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:02.990293  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:02.990330  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:02.990346  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:03.075761  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:03.075807  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:03.117855  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:03.117884  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:05.679965  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:05.693682  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:05.693746  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:05.738310  726389 cri.go:89] found id: ""
	I1025 22:56:05.738368  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.738382  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:05.738391  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:05.738463  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:05.784429  726389 cri.go:89] found id: ""
	I1025 22:56:05.784464  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.784477  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:05.784488  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:05.784543  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:05.832760  726389 cri.go:89] found id: ""
	I1025 22:56:05.832795  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.832805  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:05.832813  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:05.832866  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:05.877090  726389 cri.go:89] found id: ""
	I1025 22:56:05.877117  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.877127  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:05.877215  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:05.877546  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:05.918619  726389 cri.go:89] found id: ""
	I1025 22:56:05.918644  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.918654  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:05.918661  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:05.918712  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:05.954285  726389 cri.go:89] found id: ""
	I1025 22:56:05.954310  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.954318  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:05.954325  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:05.954389  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:05.999082  726389 cri.go:89] found id: ""
	I1025 22:56:05.999114  726389 logs.go:282] 0 containers: []
	W1025 22:56:05.999126  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:05.999134  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:05.999205  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:06.042029  726389 cri.go:89] found id: ""
	I1025 22:56:06.042066  726389 logs.go:282] 0 containers: []
	W1025 22:56:06.042078  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:06.042091  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:06.042109  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:06.124327  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:06.124374  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:06.188895  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:06.188925  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:06.274168  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:06.274213  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:06.290891  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:06.290930  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:06.379189  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:08.879510  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:08.899361  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:08.899446  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:08.937568  726389 cri.go:89] found id: ""
	I1025 22:56:08.937591  726389 logs.go:282] 0 containers: []
	W1025 22:56:08.937600  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:08.937605  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:08.937672  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:08.970746  726389 cri.go:89] found id: ""
	I1025 22:56:08.970778  726389 logs.go:282] 0 containers: []
	W1025 22:56:08.970789  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:08.970798  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:08.970882  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:09.006043  726389 cri.go:89] found id: ""
	I1025 22:56:09.006079  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.006092  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:09.006101  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:09.006166  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:09.044051  726389 cri.go:89] found id: ""
	I1025 22:56:09.044080  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.044093  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:09.044101  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:09.044164  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:09.083114  726389 cri.go:89] found id: ""
	I1025 22:56:09.083148  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.083157  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:09.083163  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:09.083222  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:09.116850  726389 cri.go:89] found id: ""
	I1025 22:56:09.116883  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.116893  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:09.116900  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:09.116981  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:09.152993  726389 cri.go:89] found id: ""
	I1025 22:56:09.153024  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.153035  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:09.153044  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:09.153105  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:09.190526  726389 cri.go:89] found id: ""
	I1025 22:56:09.190554  726389 logs.go:282] 0 containers: []
	W1025 22:56:09.190563  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:09.190572  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:09.190584  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:09.242076  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:09.242117  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:09.256145  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:09.256183  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:09.332468  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:09.332497  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:09.332513  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:09.417198  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:09.417241  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:11.960081  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:11.976076  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:11.976171  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:12.022338  726389 cri.go:89] found id: ""
	I1025 22:56:12.022370  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.022379  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:12.022386  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:12.022449  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:12.085855  726389 cri.go:89] found id: ""
	I1025 22:56:12.085890  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.085900  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:12.085909  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:12.085975  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:12.131161  726389 cri.go:89] found id: ""
	I1025 22:56:12.131186  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.131195  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:12.131203  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:12.131259  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:12.167150  726389 cri.go:89] found id: ""
	I1025 22:56:12.167189  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.167201  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:12.167211  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:12.167276  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:12.208918  726389 cri.go:89] found id: ""
	I1025 22:56:12.208966  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.208979  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:12.208987  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:12.209057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:12.245972  726389 cri.go:89] found id: ""
	I1025 22:56:12.246002  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.246011  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:12.246017  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:12.246069  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:12.283946  726389 cri.go:89] found id: ""
	I1025 22:56:12.283984  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.283996  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:12.284005  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:12.284071  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:12.322444  726389 cri.go:89] found id: ""
	I1025 22:56:12.322471  726389 logs.go:282] 0 containers: []
	W1025 22:56:12.322479  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:12.322495  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:12.322507  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:12.408408  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:12.408454  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:12.454678  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:12.454714  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:12.509693  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:12.509733  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:12.525234  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:12.525265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:12.597615  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:15.097778  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:15.111570  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:15.111660  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:15.147659  726389 cri.go:89] found id: ""
	I1025 22:56:15.147692  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.147702  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:15.147710  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:15.147775  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:15.184145  726389 cri.go:89] found id: ""
	I1025 22:56:15.184178  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.184189  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:15.184198  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:15.184291  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:15.225450  726389 cri.go:89] found id: ""
	I1025 22:56:15.225489  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.225501  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:15.225509  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:15.225587  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:15.270106  726389 cri.go:89] found id: ""
	I1025 22:56:15.270138  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.270150  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:15.270159  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:15.270227  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:15.306087  726389 cri.go:89] found id: ""
	I1025 22:56:15.306124  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.306134  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:15.306141  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:15.306216  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:15.341732  726389 cri.go:89] found id: ""
	I1025 22:56:15.341767  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.341778  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:15.341798  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:15.341882  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:15.376754  726389 cri.go:89] found id: ""
	I1025 22:56:15.376782  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.376793  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:15.376802  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:15.376861  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:15.410225  726389 cri.go:89] found id: ""
	I1025 22:56:15.410255  726389 logs.go:282] 0 containers: []
	W1025 22:56:15.410266  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:15.410280  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:15.410296  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:15.462125  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:15.462172  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:15.476851  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:15.476886  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:15.549178  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:15.549205  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:15.549221  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:15.628663  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:15.628709  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:18.168169  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:18.186503  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:18.186608  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:18.230864  726389 cri.go:89] found id: ""
	I1025 22:56:18.230904  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.230917  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:18.230926  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:18.230996  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:18.267480  726389 cri.go:89] found id: ""
	I1025 22:56:18.267512  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.267524  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:18.267533  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:18.267622  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:18.306907  726389 cri.go:89] found id: ""
	I1025 22:56:18.306941  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.306952  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:18.306963  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:18.307036  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:18.343079  726389 cri.go:89] found id: ""
	I1025 22:56:18.343117  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.343129  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:18.343137  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:18.343201  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:18.382535  726389 cri.go:89] found id: ""
	I1025 22:56:18.382569  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.382581  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:18.382589  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:18.382657  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:18.427427  726389 cri.go:89] found id: ""
	I1025 22:56:18.427453  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.427463  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:18.427470  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:18.427522  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:18.466591  726389 cri.go:89] found id: ""
	I1025 22:56:18.466626  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.466637  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:18.466645  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:18.466716  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:18.500376  726389 cri.go:89] found id: ""
	I1025 22:56:18.500414  726389 logs.go:282] 0 containers: []
	W1025 22:56:18.500427  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:18.500443  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:18.500462  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:18.600366  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:18.600418  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:18.642465  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:18.642509  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:18.709243  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:18.709289  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:18.724614  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:18.724656  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:18.797677  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:21.298157  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:21.311804  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:21.311954  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:21.344567  726389 cri.go:89] found id: ""
	I1025 22:56:21.344598  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.344607  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:21.344614  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:21.344668  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:21.382836  726389 cri.go:89] found id: ""
	I1025 22:56:21.382873  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.382886  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:21.382894  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:21.382956  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:21.423068  726389 cri.go:89] found id: ""
	I1025 22:56:21.423100  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.423111  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:21.423120  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:21.423179  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:21.471275  726389 cri.go:89] found id: ""
	I1025 22:56:21.471305  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.471328  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:21.471346  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:21.471411  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:21.520871  726389 cri.go:89] found id: ""
	I1025 22:56:21.520896  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.520903  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:21.520909  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:21.520979  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:21.565184  726389 cri.go:89] found id: ""
	I1025 22:56:21.565209  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.565219  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:21.565227  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:21.565281  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:21.614202  726389 cri.go:89] found id: ""
	I1025 22:56:21.614234  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.614246  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:21.614255  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:21.614319  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:21.660058  726389 cri.go:89] found id: ""
	I1025 22:56:21.660110  726389 logs.go:282] 0 containers: []
	W1025 22:56:21.660123  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:21.660137  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:21.660156  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:21.731060  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:21.731095  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:21.747797  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:21.747833  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:21.830140  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:21.830225  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:21.830255  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:21.923315  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:21.923355  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:24.472198  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:24.485062  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:24.485127  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:24.523870  726389 cri.go:89] found id: ""
	I1025 22:56:24.523894  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.523902  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:24.523909  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:24.523962  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:24.562444  726389 cri.go:89] found id: ""
	I1025 22:56:24.562473  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.562483  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:24.562490  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:24.562549  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:24.601643  726389 cri.go:89] found id: ""
	I1025 22:56:24.601677  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.601688  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:24.601697  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:24.601762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:24.636495  726389 cri.go:89] found id: ""
	I1025 22:56:24.636525  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.636536  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:24.636544  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:24.636610  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:24.671388  726389 cri.go:89] found id: ""
	I1025 22:56:24.671421  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.671431  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:24.671437  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:24.671500  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:24.703550  726389 cri.go:89] found id: ""
	I1025 22:56:24.703578  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.703586  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:24.703592  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:24.703648  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:24.739987  726389 cri.go:89] found id: ""
	I1025 22:56:24.740020  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.740032  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:24.740040  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:24.740097  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:24.776672  726389 cri.go:89] found id: ""
	I1025 22:56:24.776705  726389 logs.go:282] 0 containers: []
	W1025 22:56:24.776716  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:24.776729  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:24.776746  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:24.853443  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:24.853468  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:24.853484  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:24.931141  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:24.931182  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:24.967153  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:24.967184  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:25.016968  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:25.017004  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:27.532830  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:27.546618  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:27.546694  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:27.580816  726389 cri.go:89] found id: ""
	I1025 22:56:27.580852  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.580864  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:27.580873  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:27.580942  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:27.619673  726389 cri.go:89] found id: ""
	I1025 22:56:27.619709  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.619720  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:27.619728  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:27.619793  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:27.652012  726389 cri.go:89] found id: ""
	I1025 22:56:27.652041  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.652049  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:27.652055  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:27.652114  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:27.685641  726389 cri.go:89] found id: ""
	I1025 22:56:27.685675  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.685687  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:27.685695  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:27.685759  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:27.719973  726389 cri.go:89] found id: ""
	I1025 22:56:27.720007  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.720018  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:27.720027  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:27.720099  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:27.757253  726389 cri.go:89] found id: ""
	I1025 22:56:27.757287  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.757296  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:27.757303  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:27.757361  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:27.798758  726389 cri.go:89] found id: ""
	I1025 22:56:27.798789  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.798801  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:27.798809  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:27.798874  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:27.834072  726389 cri.go:89] found id: ""
	I1025 22:56:27.834105  726389 logs.go:282] 0 containers: []
	W1025 22:56:27.834116  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:27.834129  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:27.834144  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:27.847743  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:27.847775  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:27.914709  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:27.914730  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:27.914744  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:27.994190  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:27.994230  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:28.035350  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:28.035380  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:30.585543  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:30.598335  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:30.598399  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:30.639103  726389 cri.go:89] found id: ""
	I1025 22:56:30.639137  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.639145  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:30.639151  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:30.639216  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:30.672922  726389 cri.go:89] found id: ""
	I1025 22:56:30.672968  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.672983  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:30.672991  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:30.673057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:30.705701  726389 cri.go:89] found id: ""
	I1025 22:56:30.705733  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.705741  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:30.705747  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:30.705814  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:30.741068  726389 cri.go:89] found id: ""
	I1025 22:56:30.741094  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.741103  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:30.741109  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:30.741175  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:30.775659  726389 cri.go:89] found id: ""
	I1025 22:56:30.775688  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.775696  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:30.775702  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:30.775765  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:30.809046  726389 cri.go:89] found id: ""
	I1025 22:56:30.809079  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.809090  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:30.809099  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:30.809167  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:30.840262  726389 cri.go:89] found id: ""
	I1025 22:56:30.840297  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.840309  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:30.840316  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:30.840379  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:30.878817  726389 cri.go:89] found id: ""
	I1025 22:56:30.878850  726389 logs.go:282] 0 containers: []
	W1025 22:56:30.878860  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:30.878871  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:30.878886  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:30.927887  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:30.927927  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:30.942303  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:30.942340  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:31.022654  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:31.022690  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:31.022707  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:31.100520  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:31.100564  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:33.645078  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:33.659010  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:33.659082  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:33.694614  726389 cri.go:89] found id: ""
	I1025 22:56:33.694643  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.694652  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:33.694658  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:33.694723  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:33.729517  726389 cri.go:89] found id: ""
	I1025 22:56:33.729544  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.729554  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:33.729561  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:33.729617  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:33.766629  726389 cri.go:89] found id: ""
	I1025 22:56:33.766659  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.766671  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:33.766678  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:33.766743  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:33.803762  726389 cri.go:89] found id: ""
	I1025 22:56:33.803790  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.803799  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:33.803805  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:33.803864  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:33.842662  726389 cri.go:89] found id: ""
	I1025 22:56:33.842691  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.842703  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:33.842712  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:33.842773  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:33.882549  726389 cri.go:89] found id: ""
	I1025 22:56:33.882580  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.882591  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:33.882601  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:33.882670  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:33.927013  726389 cri.go:89] found id: ""
	I1025 22:56:33.927043  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.927053  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:33.927061  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:33.927124  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:33.960852  726389 cri.go:89] found id: ""
	I1025 22:56:33.960876  726389 logs.go:282] 0 containers: []
	W1025 22:56:33.960883  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:33.960894  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:33.960908  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:34.011323  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:34.011358  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:34.024919  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:34.024968  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:34.101711  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:34.101736  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:34.101750  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:34.184947  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:34.185022  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:36.728650  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:36.748712  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:36.748796  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:36.795583  726389 cri.go:89] found id: ""
	I1025 22:56:36.795614  726389 logs.go:282] 0 containers: []
	W1025 22:56:36.795626  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:36.795634  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:36.795706  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:36.844766  726389 cri.go:89] found id: ""
	I1025 22:56:36.844793  726389 logs.go:282] 0 containers: []
	W1025 22:56:36.844801  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:36.844807  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:36.844864  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:36.889168  726389 cri.go:89] found id: ""
	I1025 22:56:36.889201  726389 logs.go:282] 0 containers: []
	W1025 22:56:36.889209  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:36.889215  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:36.889266  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:36.931647  726389 cri.go:89] found id: ""
	I1025 22:56:36.931681  726389 logs.go:282] 0 containers: []
	W1025 22:56:36.931693  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:36.931703  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:36.931779  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:36.982440  726389 cri.go:89] found id: ""
	I1025 22:56:36.982471  726389 logs.go:282] 0 containers: []
	W1025 22:56:36.982483  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:36.982491  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:36.982559  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:37.030340  726389 cri.go:89] found id: ""
	I1025 22:56:37.030381  726389 logs.go:282] 0 containers: []
	W1025 22:56:37.030393  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:37.030402  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:37.030467  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:37.080467  726389 cri.go:89] found id: ""
	I1025 22:56:37.080499  726389 logs.go:282] 0 containers: []
	W1025 22:56:37.080512  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:37.080520  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:37.080587  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:37.130102  726389 cri.go:89] found id: ""
	I1025 22:56:37.130132  726389 logs.go:282] 0 containers: []
	W1025 22:56:37.130144  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:37.130157  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:37.130173  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:37.208237  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:37.208287  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:37.225743  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:37.225784  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:37.317685  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:37.317709  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:37.317721  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:37.428675  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:37.428723  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:39.976356  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:39.989421  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:39.989501  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:40.030189  726389 cri.go:89] found id: ""
	I1025 22:56:40.030234  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.030248  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:40.030257  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:40.030345  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:40.066141  726389 cri.go:89] found id: ""
	I1025 22:56:40.066180  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.066193  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:40.066206  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:40.066286  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:40.105777  726389 cri.go:89] found id: ""
	I1025 22:56:40.105817  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.105830  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:40.105846  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:40.105923  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:40.142969  726389 cri.go:89] found id: ""
	I1025 22:56:40.143006  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.143019  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:40.143027  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:40.143094  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:40.177507  726389 cri.go:89] found id: ""
	I1025 22:56:40.177554  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.177570  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:40.177582  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:40.177660  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:40.217420  726389 cri.go:89] found id: ""
	I1025 22:56:40.217457  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.217470  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:40.217480  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:40.217550  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:40.260246  726389 cri.go:89] found id: ""
	I1025 22:56:40.260274  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.260284  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:40.260292  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:40.260363  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:40.297529  726389 cri.go:89] found id: ""
	I1025 22:56:40.297567  726389 logs.go:282] 0 containers: []
	W1025 22:56:40.297580  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:40.297598  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:40.297613  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:40.384491  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:40.384530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:40.424492  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:40.424524  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:40.475063  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:40.475105  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:40.489141  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:40.489172  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:40.575890  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:43.076151  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:43.091214  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:43.091303  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:43.132494  726389 cri.go:89] found id: ""
	I1025 22:56:43.132528  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.132540  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:43.132548  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:43.132613  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:43.167413  726389 cri.go:89] found id: ""
	I1025 22:56:43.167444  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.167455  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:43.167463  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:43.167535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:43.204279  726389 cri.go:89] found id: ""
	I1025 22:56:43.204311  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.204322  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:43.204334  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:43.204403  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:43.245045  726389 cri.go:89] found id: ""
	I1025 22:56:43.245083  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.245097  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:43.245106  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:43.245171  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:43.285197  726389 cri.go:89] found id: ""
	I1025 22:56:43.285228  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.285239  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:43.285248  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:43.285317  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:43.324464  726389 cri.go:89] found id: ""
	I1025 22:56:43.324498  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.324510  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:43.324519  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:43.324579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:43.361808  726389 cri.go:89] found id: ""
	I1025 22:56:43.361834  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.361843  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:43.361849  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:43.361901  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:43.399276  726389 cri.go:89] found id: ""
	I1025 22:56:43.399308  726389 logs.go:282] 0 containers: []
	W1025 22:56:43.399319  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:43.399333  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:43.399348  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:43.452985  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:43.453023  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:43.470197  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:43.470234  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:43.543779  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:43.543809  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:43.543825  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:43.625905  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:43.625958  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:46.164917  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:46.179565  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:46.179635  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:46.227095  726389 cri.go:89] found id: ""
	I1025 22:56:46.227133  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.227145  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:46.227154  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:46.227209  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:46.267325  726389 cri.go:89] found id: ""
	I1025 22:56:46.267366  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.267379  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:46.267387  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:46.267447  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:46.315237  726389 cri.go:89] found id: ""
	I1025 22:56:46.315273  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.315286  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:46.315294  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:46.315373  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:46.360490  726389 cri.go:89] found id: ""
	I1025 22:56:46.360522  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.360533  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:46.360542  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:46.360604  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:46.395701  726389 cri.go:89] found id: ""
	I1025 22:56:46.395734  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.395746  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:46.395753  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:46.395829  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:46.430689  726389 cri.go:89] found id: ""
	I1025 22:56:46.430720  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.430731  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:46.430740  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:46.430802  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:46.465497  726389 cri.go:89] found id: ""
	I1025 22:56:46.465532  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.465544  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:46.465551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:46.465618  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:46.498908  726389 cri.go:89] found id: ""
	I1025 22:56:46.498944  726389 logs.go:282] 0 containers: []
	W1025 22:56:46.498957  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:46.498969  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:46.498986  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:46.536776  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:46.536810  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:46.587017  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:46.587054  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:46.600803  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:46.600834  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:46.668982  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:46.669012  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:46.669030  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:49.256034  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:49.269736  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:49.269820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:49.307659  726389 cri.go:89] found id: ""
	I1025 22:56:49.307693  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.307706  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:49.307715  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:49.307785  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:49.345517  726389 cri.go:89] found id: ""
	I1025 22:56:49.345550  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.345562  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:49.345571  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:49.345665  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:49.389856  726389 cri.go:89] found id: ""
	I1025 22:56:49.389905  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.389919  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:49.389929  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:49.389995  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:49.427870  726389 cri.go:89] found id: ""
	I1025 22:56:49.427906  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.427919  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:49.427927  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:49.427996  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:49.463842  726389 cri.go:89] found id: ""
	I1025 22:56:49.463874  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.463886  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:49.463895  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:49.463960  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:49.503566  726389 cri.go:89] found id: ""
	I1025 22:56:49.503604  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.503616  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:49.503625  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:49.503702  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:49.543009  726389 cri.go:89] found id: ""
	I1025 22:56:49.543040  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.543053  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:49.543061  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:49.543127  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:49.582749  726389 cri.go:89] found id: ""
	I1025 22:56:49.582779  726389 logs.go:282] 0 containers: []
	W1025 22:56:49.582787  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:49.582797  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:49.582810  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:49.596169  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:49.596200  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:49.668974  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:49.669001  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:49.669018  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:49.746577  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:49.746611  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:49.789258  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:49.789294  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:52.344248  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:52.357770  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:52.357845  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:52.391340  726389 cri.go:89] found id: ""
	I1025 22:56:52.391371  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.391400  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:52.391415  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:52.391483  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:52.426513  726389 cri.go:89] found id: ""
	I1025 22:56:52.426548  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.426560  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:52.426569  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:52.426632  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:52.461136  726389 cri.go:89] found id: ""
	I1025 22:56:52.461176  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.461188  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:52.461199  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:52.461274  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:52.498645  726389 cri.go:89] found id: ""
	I1025 22:56:52.498675  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.498686  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:52.498695  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:52.498759  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:52.535393  726389 cri.go:89] found id: ""
	I1025 22:56:52.535429  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.535441  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:52.535449  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:52.535519  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:52.571670  726389 cri.go:89] found id: ""
	I1025 22:56:52.571702  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.571714  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:52.571722  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:52.571788  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:52.607016  726389 cri.go:89] found id: ""
	I1025 22:56:52.607049  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.607061  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:52.607069  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:52.607131  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:52.644629  726389 cri.go:89] found id: ""
	I1025 22:56:52.644662  726389 logs.go:282] 0 containers: []
	W1025 22:56:52.644671  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:52.644680  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:52.644692  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:52.704166  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:52.704208  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:52.734022  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:52.734057  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:52.814926  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:52.814953  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:52.814969  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:52.894767  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:52.894808  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:55.441053  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:55.456430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:55.456504  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:55.497439  726389 cri.go:89] found id: ""
	I1025 22:56:55.497476  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.497489  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:55.497497  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:55.497566  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:55.537365  726389 cri.go:89] found id: ""
	I1025 22:56:55.537436  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.537451  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:55.537460  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:55.537533  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:55.583014  726389 cri.go:89] found id: ""
	I1025 22:56:55.583036  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.583045  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:55.583053  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:55.583108  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:55.622879  726389 cri.go:89] found id: ""
	I1025 22:56:55.622907  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.622915  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:55.622921  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:55.622971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:55.662962  726389 cri.go:89] found id: ""
	I1025 22:56:55.663001  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.663015  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:55.663025  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:55.663094  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:55.704810  726389 cri.go:89] found id: ""
	I1025 22:56:55.704839  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.704848  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:55.704855  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:55.704904  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:55.748813  726389 cri.go:89] found id: ""
	I1025 22:56:55.748848  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.748860  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:55.748868  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:55.748934  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:55.788167  726389 cri.go:89] found id: ""
	I1025 22:56:55.788202  726389 logs.go:282] 0 containers: []
	W1025 22:56:55.788215  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:55.788227  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:55.788245  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:55.876380  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:55.876404  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:55.876419  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:55.968568  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:55.968604  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:56:56.010609  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:56.010648  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:56.063691  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:56.063732  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:58.579570  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:56:58.593542  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:56:58.593624  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:56:58.631311  726389 cri.go:89] found id: ""
	I1025 22:56:58.631339  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.631350  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:56:58.631358  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:56:58.631429  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:56:58.669785  726389 cri.go:89] found id: ""
	I1025 22:56:58.669817  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.669826  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:56:58.669840  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:56:58.669894  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:56:58.705636  726389 cri.go:89] found id: ""
	I1025 22:56:58.705671  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.705684  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:56:58.705693  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:56:58.705761  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:56:58.745643  726389 cri.go:89] found id: ""
	I1025 22:56:58.745676  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.745687  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:56:58.745696  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:56:58.745764  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:56:58.783229  726389 cri.go:89] found id: ""
	I1025 22:56:58.783261  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.783270  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:56:58.783276  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:56:58.783329  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:56:58.817849  726389 cri.go:89] found id: ""
	I1025 22:56:58.817888  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.817900  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:56:58.817908  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:56:58.817971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:56:58.855019  726389 cri.go:89] found id: ""
	I1025 22:56:58.855057  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.855070  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:56:58.855078  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:56:58.855136  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:56:58.887959  726389 cri.go:89] found id: ""
	I1025 22:56:58.887991  726389 logs.go:282] 0 containers: []
	W1025 22:56:58.888000  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:56:58.888011  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:56:58.888023  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:56:58.938747  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:56:58.938787  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:56:58.952766  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:56:58.952801  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:56:59.024689  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:56:59.024719  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:56:59.024736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:56:59.099805  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:56:59.099852  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:01.639681  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:01.653065  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:01.653142  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:01.684047  726389 cri.go:89] found id: ""
	I1025 22:57:01.684078  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.684090  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:01.684098  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:01.684160  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:01.717555  726389 cri.go:89] found id: ""
	I1025 22:57:01.717590  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.717603  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:01.717611  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:01.717706  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:01.749502  726389 cri.go:89] found id: ""
	I1025 22:57:01.749532  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.749541  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:01.749551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:01.749607  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:01.781841  726389 cri.go:89] found id: ""
	I1025 22:57:01.781869  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.781878  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:01.781885  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:01.781958  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:01.818407  726389 cri.go:89] found id: ""
	I1025 22:57:01.818438  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.818449  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:01.818455  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:01.818514  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:01.852025  726389 cri.go:89] found id: ""
	I1025 22:57:01.852058  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.852070  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:01.852078  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:01.852148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:01.886175  726389 cri.go:89] found id: ""
	I1025 22:57:01.886204  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.886215  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:01.886223  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:01.886289  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:01.919084  726389 cri.go:89] found id: ""
	I1025 22:57:01.919112  726389 logs.go:282] 0 containers: []
	W1025 22:57:01.919123  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:01.919135  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:01.919149  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:01.966848  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:01.966880  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:01.979706  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:01.979736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:02.046660  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:02.046691  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:02.046707  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:02.126471  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:02.126517  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:04.664834  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:04.677759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:04.677820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:04.710557  726389 cri.go:89] found id: ""
	I1025 22:57:04.710585  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.710594  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:04.710601  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:04.710655  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:04.747197  726389 cri.go:89] found id: ""
	I1025 22:57:04.747225  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.747234  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:04.747240  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:04.747288  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:04.787986  726389 cri.go:89] found id: ""
	I1025 22:57:04.788018  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.788027  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:04.788034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:04.788091  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:04.819796  726389 cri.go:89] found id: ""
	I1025 22:57:04.819824  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.819833  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:04.819839  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:04.819887  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:04.856885  726389 cri.go:89] found id: ""
	I1025 22:57:04.856925  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.856938  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:04.856946  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:04.857021  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:04.901723  726389 cri.go:89] found id: ""
	I1025 22:57:04.901759  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.901770  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:04.901779  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:04.901846  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:04.943775  726389 cri.go:89] found id: ""
	I1025 22:57:04.943810  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.943821  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:04.943830  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:04.943893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:04.985957  726389 cri.go:89] found id: ""
	I1025 22:57:04.985982  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.985991  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:04.986000  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:04.986012  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:05.061490  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:05.061529  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:05.103028  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:05.103059  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:05.152607  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:05.152644  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:05.167577  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:05.167624  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:05.246428  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:07.747514  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:07.764567  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:07.764653  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:07.804356  726389 cri.go:89] found id: ""
	I1025 22:57:07.804453  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.804479  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:07.804498  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:07.804594  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:07.852155  726389 cri.go:89] found id: ""
	I1025 22:57:07.852190  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.852201  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:07.852210  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:07.852287  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:07.906149  726389 cri.go:89] found id: ""
	I1025 22:57:07.906195  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.906209  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:07.906237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:07.906321  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:07.946134  726389 cri.go:89] found id: ""
	I1025 22:57:07.946165  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.946177  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:07.946189  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:07.946257  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:07.994191  726389 cri.go:89] found id: ""
	I1025 22:57:07.994225  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.994243  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:07.994252  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:07.994324  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:08.039254  726389 cri.go:89] found id: ""
	I1025 22:57:08.039284  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.039296  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:08.039303  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:08.039370  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:08.083985  726389 cri.go:89] found id: ""
	I1025 22:57:08.084016  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.084027  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:08.084034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:08.084100  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:08.121051  726389 cri.go:89] found id: ""
	I1025 22:57:08.121084  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.121096  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:08.121111  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:08.121128  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:08.210698  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:08.210743  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:08.251297  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:08.251326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:08.309007  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:08.309049  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:08.323243  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:08.323281  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:08.395704  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:10.896885  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:10.912430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:10.912544  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:10.949298  726389 cri.go:89] found id: ""
	I1025 22:57:10.949332  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.949345  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:10.949356  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:10.949420  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:10.992906  726389 cri.go:89] found id: ""
	I1025 22:57:10.992941  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.992963  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:10.992972  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:10.993037  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:11.035283  726389 cri.go:89] found id: ""
	I1025 22:57:11.035312  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.035321  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:11.035329  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:11.035391  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:11.075912  726389 cri.go:89] found id: ""
	I1025 22:57:11.075945  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.075957  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:11.075966  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:11.076031  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:11.116675  726389 cri.go:89] found id: ""
	I1025 22:57:11.116709  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.116721  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:11.116727  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:11.116788  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:11.153210  726389 cri.go:89] found id: ""
	I1025 22:57:11.153244  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.153258  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:11.153267  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:11.153331  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:11.195233  726389 cri.go:89] found id: ""
	I1025 22:57:11.195266  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.195278  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:11.195285  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:11.195346  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:11.237164  726389 cri.go:89] found id: ""
	I1025 22:57:11.237195  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.237206  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:11.237219  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:11.237236  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:11.299994  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:11.300043  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:11.316006  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:11.316055  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:11.404343  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:11.404368  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:11.404384  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:11.496349  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:11.496391  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:14.050229  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:14.064529  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:14.064615  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:14.101831  726389 cri.go:89] found id: ""
	I1025 22:57:14.101863  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.101877  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:14.101886  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:14.101950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:14.139876  726389 cri.go:89] found id: ""
	I1025 22:57:14.139906  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.139915  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:14.139921  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:14.139982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:14.175405  726389 cri.go:89] found id: ""
	I1025 22:57:14.175442  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.175454  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:14.175463  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:14.175535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:14.220337  726389 cri.go:89] found id: ""
	I1025 22:57:14.220372  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.220392  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:14.220400  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:14.220471  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:14.262358  726389 cri.go:89] found id: ""
	I1025 22:57:14.262384  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.262393  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:14.262399  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:14.262457  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:14.303586  726389 cri.go:89] found id: ""
	I1025 22:57:14.303621  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.303629  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:14.303636  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:14.303687  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:14.343365  726389 cri.go:89] found id: ""
	I1025 22:57:14.343399  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.343411  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:14.343421  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:14.343494  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:14.376842  726389 cri.go:89] found id: ""
	I1025 22:57:14.376879  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.376892  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:14.376905  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:14.376921  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:14.426780  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:14.426819  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:14.439976  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:14.440007  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:14.512226  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:14.512258  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:14.512276  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:14.588240  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:14.588284  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:17.132197  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:17.146596  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:17.146674  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:17.185560  726389 cri.go:89] found id: ""
	I1025 22:57:17.185593  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.185603  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:17.185610  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:17.185670  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:17.220864  726389 cri.go:89] found id: ""
	I1025 22:57:17.220897  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.220910  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:17.220919  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:17.221004  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:17.260844  726389 cri.go:89] found id: ""
	I1025 22:57:17.260872  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.260880  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:17.260887  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:17.260939  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:17.302800  726389 cri.go:89] found id: ""
	I1025 22:57:17.302833  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.302845  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:17.302853  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:17.302913  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:17.337851  726389 cri.go:89] found id: ""
	I1025 22:57:17.337881  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.337892  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:17.337901  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:17.337959  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:17.374697  726389 cri.go:89] found id: ""
	I1025 22:57:17.374739  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.374752  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:17.374760  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:17.374827  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:17.419883  726389 cri.go:89] found id: ""
	I1025 22:57:17.419913  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.419923  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:17.419929  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:17.419981  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:17.453770  726389 cri.go:89] found id: ""
	I1025 22:57:17.453797  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.453809  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:17.453821  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:17.453835  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:17.467935  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:17.467971  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:17.546221  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:17.546251  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:17.546269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:17.655338  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:17.655395  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:17.696499  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:17.696531  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:20.249946  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:20.267883  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:20.267964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:20.317028  726389 cri.go:89] found id: ""
	I1025 22:57:20.317071  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.317083  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:20.317092  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:20.317159  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:20.362449  726389 cri.go:89] found id: ""
	I1025 22:57:20.362481  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.362491  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:20.362497  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:20.362548  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:20.398308  726389 cri.go:89] found id: ""
	I1025 22:57:20.398348  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.398369  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:20.398377  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:20.398450  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:20.446702  726389 cri.go:89] found id: ""
	I1025 22:57:20.446731  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.446740  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:20.446746  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:20.446798  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:20.489776  726389 cri.go:89] found id: ""
	I1025 22:57:20.489809  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.489826  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:20.489833  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:20.489899  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:20.535387  726389 cri.go:89] found id: ""
	I1025 22:57:20.535415  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.535426  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:20.535442  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:20.535507  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:20.573433  726389 cri.go:89] found id: ""
	I1025 22:57:20.573467  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.573479  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:20.573487  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:20.573554  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:20.613584  726389 cri.go:89] found id: ""
	I1025 22:57:20.613619  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.613631  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:20.613643  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:20.613664  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:20.675387  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:20.675426  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:20.691467  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:20.691513  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:20.813943  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:20.813975  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:20.813992  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:20.904974  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:20.905028  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.450429  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:23.464096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:23.464169  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:23.500126  726389 cri.go:89] found id: ""
	I1025 22:57:23.500152  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.500161  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:23.500167  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:23.500220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:23.534564  726389 cri.go:89] found id: ""
	I1025 22:57:23.534597  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.534608  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:23.534615  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:23.534666  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:23.577493  726389 cri.go:89] found id: ""
	I1025 22:57:23.577529  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.577541  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:23.577551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:23.577679  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:23.616432  726389 cri.go:89] found id: ""
	I1025 22:57:23.616463  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.616474  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:23.616488  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:23.616553  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:23.655679  726389 cri.go:89] found id: ""
	I1025 22:57:23.655715  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.655727  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:23.655735  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:23.655804  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:23.695528  726389 cri.go:89] found id: ""
	I1025 22:57:23.695558  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.695570  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:23.695578  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:23.695642  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:23.734570  726389 cri.go:89] found id: ""
	I1025 22:57:23.734610  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.734622  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:23.734631  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:23.734703  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:23.774178  726389 cri.go:89] found id: ""
	I1025 22:57:23.774213  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.774225  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:23.774238  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:23.774254  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:23.857347  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:23.857389  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.896130  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:23.896167  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:23.948276  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:23.948320  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:23.961809  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:23.961840  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:24.053746  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:26.553979  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.567886  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:26.567964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:26.603338  726389 cri.go:89] found id: ""
	I1025 22:57:26.603376  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.603389  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:26.603403  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:26.603475  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:26.637525  726389 cri.go:89] found id: ""
	I1025 22:57:26.637548  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.637556  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:26.637562  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:26.637609  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:26.672117  726389 cri.go:89] found id: ""
	I1025 22:57:26.672150  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.672159  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:26.672166  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:26.672230  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:26.705637  726389 cri.go:89] found id: ""
	I1025 22:57:26.705669  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.705681  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:26.705689  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:26.705762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:26.759040  726389 cri.go:89] found id: ""
	I1025 22:57:26.759070  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.759084  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:26.759092  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:26.759161  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:26.811512  726389 cri.go:89] found id: ""
	I1025 22:57:26.811537  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.811547  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:26.811555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:26.811641  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:26.851215  726389 cri.go:89] found id: ""
	I1025 22:57:26.851245  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.851256  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:26.851264  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:26.851330  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:26.884460  726389 cri.go:89] found id: ""
	I1025 22:57:26.884495  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.884508  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:26.884520  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:26.884535  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:26.960048  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:26.960092  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:26.998588  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:26.998620  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:27.061646  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:27.061692  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:27.078350  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:27.078385  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:27.150478  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:29.650805  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:29.664484  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:29.664563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:29.706919  726389 cri.go:89] found id: ""
	I1025 22:57:29.706950  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.706961  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:29.706968  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:29.707032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:29.748272  726389 cri.go:89] found id: ""
	I1025 22:57:29.748301  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.748313  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:29.748322  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:29.748383  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:29.783239  726389 cri.go:89] found id: ""
	I1025 22:57:29.783281  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.783303  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:29.783315  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:29.783381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:29.828942  726389 cri.go:89] found id: ""
	I1025 22:57:29.829005  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.829021  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:29.829031  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:29.829112  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:29.874831  726389 cri.go:89] found id: ""
	I1025 22:57:29.874864  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.874876  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:29.874885  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:29.874950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:29.920380  726389 cri.go:89] found id: ""
	I1025 22:57:29.920411  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.920422  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:29.920430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:29.920495  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:29.964594  726389 cri.go:89] found id: ""
	I1025 22:57:29.964624  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.964636  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:29.964643  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:29.964713  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:30.000416  726389 cri.go:89] found id: ""
	I1025 22:57:30.000449  726389 logs.go:282] 0 containers: []
	W1025 22:57:30.000461  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:30.000475  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:30.000500  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:30.073028  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:30.073055  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:30.073072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:30.158430  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:30.158481  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:30.212493  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:30.212530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:30.289552  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:30.289606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:32.808776  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:32.822039  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:32.822111  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:32.857007  726389 cri.go:89] found id: ""
	I1025 22:57:32.857042  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.857054  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:32.857063  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:32.857122  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:32.902015  726389 cri.go:89] found id: ""
	I1025 22:57:32.902045  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.902057  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:32.902066  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:32.902146  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:32.962252  726389 cri.go:89] found id: ""
	I1025 22:57:32.962287  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.962299  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:32.962307  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:32.962381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:33.010092  726389 cri.go:89] found id: ""
	I1025 22:57:33.010129  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.010140  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:33.010149  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:33.010219  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:33.057453  726389 cri.go:89] found id: ""
	I1025 22:57:33.057482  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.057492  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:33.057499  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:33.057618  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:33.096991  726389 cri.go:89] found id: ""
	I1025 22:57:33.097024  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.097035  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:33.097042  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:33.097092  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:33.130710  726389 cri.go:89] found id: ""
	I1025 22:57:33.130740  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.130751  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:33.130759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:33.130820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:33.169440  726389 cri.go:89] found id: ""
	I1025 22:57:33.169479  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.169491  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:33.169505  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:33.169520  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:33.249558  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:33.249586  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:33.249603  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:33.364568  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:33.364613  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:33.415233  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:33.415264  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:33.472943  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:33.473014  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:35.989111  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:36.002822  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:36.002901  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:36.042325  726389 cri.go:89] found id: ""
	I1025 22:57:36.042362  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.042373  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:36.042381  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:36.042446  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:36.083924  726389 cri.go:89] found id: ""
	I1025 22:57:36.083957  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.083968  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:36.083976  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:36.084047  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:36.117475  726389 cri.go:89] found id: ""
	I1025 22:57:36.117511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.117523  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:36.117531  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:36.117592  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:36.151851  726389 cri.go:89] found id: ""
	I1025 22:57:36.151888  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.151901  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:36.151909  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:36.151975  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:36.188798  726389 cri.go:89] found id: ""
	I1025 22:57:36.188825  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.188837  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:36.188845  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:36.188905  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:36.222491  726389 cri.go:89] found id: ""
	I1025 22:57:36.222532  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.222544  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:36.222555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:36.222621  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:36.257481  726389 cri.go:89] found id: ""
	I1025 22:57:36.257511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.257520  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:36.257527  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:36.257580  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:36.291774  726389 cri.go:89] found id: ""
	I1025 22:57:36.291805  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.291817  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:36.291829  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:36.291845  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:36.341240  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:36.341288  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:36.355280  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:36.355312  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:36.420727  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:36.420756  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:36.420770  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:36.496896  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:36.496943  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.035530  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.053640  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:39.053721  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:39.095892  726389 cri.go:89] found id: ""
	I1025 22:57:39.095924  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.095936  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:39.095945  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:39.096010  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:39.135571  726389 cri.go:89] found id: ""
	I1025 22:57:39.135603  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.135614  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:39.135621  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:39.135680  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:39.174481  726389 cri.go:89] found id: ""
	I1025 22:57:39.174517  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.174530  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:39.174539  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:39.174597  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:39.214453  726389 cri.go:89] found id: ""
	I1025 22:57:39.214488  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.214505  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:39.214512  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:39.214565  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:39.251084  726389 cri.go:89] found id: ""
	I1025 22:57:39.251111  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.251119  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:39.251126  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:39.251186  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:39.292067  726389 cri.go:89] found id: ""
	I1025 22:57:39.292098  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.292108  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:39.292117  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:39.292183  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:39.331918  726389 cri.go:89] found id: ""
	I1025 22:57:39.331953  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.331964  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:39.331972  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:39.332032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:39.366300  726389 cri.go:89] found id: ""
	I1025 22:57:39.366334  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.366346  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:39.366358  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:39.366373  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:39.451297  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:39.451344  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.492655  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:39.492695  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:39.551959  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:39.552004  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:39.565900  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:39.565934  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:39.637894  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:42.138727  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:42.152525  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:42.152616  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:42.190900  726389 cri.go:89] found id: ""
	I1025 22:57:42.190935  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.190947  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:42.190955  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:42.191043  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:42.237668  726389 cri.go:89] found id: ""
	I1025 22:57:42.237698  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.237711  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:42.237720  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:42.237781  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:42.289049  726389 cri.go:89] found id: ""
	I1025 22:57:42.289077  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.289087  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:42.289096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:42.289155  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:42.334276  726389 cri.go:89] found id: ""
	I1025 22:57:42.334306  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.334318  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:42.334327  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:42.334385  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:42.379295  726389 cri.go:89] found id: ""
	I1025 22:57:42.379317  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.379325  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:42.379331  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:42.379375  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:42.416452  726389 cri.go:89] found id: ""
	I1025 22:57:42.416484  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.416496  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:42.416504  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:42.416563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:42.455290  726389 cri.go:89] found id: ""
	I1025 22:57:42.455324  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.455336  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:42.455352  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:42.455421  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:42.493367  726389 cri.go:89] found id: ""
	I1025 22:57:42.493396  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.493413  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:42.493426  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:42.493444  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:42.511673  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:42.511724  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:42.589951  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:42.589976  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:42.589994  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:42.697460  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:42.697498  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:42.757645  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:42.757672  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:45.312071  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:45.325800  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:45.325881  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:45.370543  726389 cri.go:89] found id: ""
	I1025 22:57:45.370572  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.370582  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:45.370590  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:45.370659  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:45.411970  726389 cri.go:89] found id: ""
	I1025 22:57:45.412009  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.412022  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:45.412032  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:45.412099  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:45.445037  726389 cri.go:89] found id: ""
	I1025 22:57:45.445073  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.445085  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:45.445094  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:45.445158  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:45.483563  726389 cri.go:89] found id: ""
	I1025 22:57:45.483595  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.483607  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:45.483615  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:45.483683  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:45.522944  726389 cri.go:89] found id: ""
	I1025 22:57:45.522978  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.522991  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:45.522999  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:45.523060  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:45.558055  726389 cri.go:89] found id: ""
	I1025 22:57:45.558086  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.558099  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:45.558107  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:45.558172  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:45.591533  726389 cri.go:89] found id: ""
	I1025 22:57:45.591564  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.591574  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:45.591581  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:45.591651  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:45.634951  726389 cri.go:89] found id: ""
	I1025 22:57:45.634985  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.634996  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:45.635009  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:45.635026  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:45.684807  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:45.684847  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:45.699038  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:45.699072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:45.762687  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:45.762718  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:45.762736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:45.851222  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:45.851265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:48.389992  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:48.403774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:48.403842  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:48.441883  726389 cri.go:89] found id: ""
	I1025 22:57:48.441908  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.441919  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:48.441929  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:48.441982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:48.477527  726389 cri.go:89] found id: ""
	I1025 22:57:48.477550  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.477558  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:48.477564  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:48.477612  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:48.514457  726389 cri.go:89] found id: ""
	I1025 22:57:48.514489  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.514500  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:48.514510  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:48.514579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:48.551264  726389 cri.go:89] found id: ""
	I1025 22:57:48.551296  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.551306  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:48.551312  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:48.551369  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:48.585426  726389 cri.go:89] found id: ""
	I1025 22:57:48.585454  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.585465  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:48.585473  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:48.585537  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:48.623734  726389 cri.go:89] found id: ""
	I1025 22:57:48.623772  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.623785  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:48.623794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:48.623865  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:48.661170  726389 cri.go:89] found id: ""
	I1025 22:57:48.661207  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.661219  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:48.661227  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:48.661304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:48.700776  726389 cri.go:89] found id: ""
	I1025 22:57:48.700803  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.700812  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:48.700825  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:48.700842  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:48.753294  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:48.753326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:48.770412  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:48.770443  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:48.847535  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:48.847562  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:48.847577  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:48.920817  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:48.920862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:51.460695  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:51.473870  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:51.473945  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:51.510350  726389 cri.go:89] found id: ""
	I1025 22:57:51.510383  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.510393  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:51.510406  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:51.510480  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:51.546705  726389 cri.go:89] found id: ""
	I1025 22:57:51.546742  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.546754  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:51.546762  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:51.546830  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:51.583728  726389 cri.go:89] found id: ""
	I1025 22:57:51.583759  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.583767  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:51.583774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:51.583831  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:51.623229  726389 cri.go:89] found id: ""
	I1025 22:57:51.623260  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.623269  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:51.623275  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:51.623332  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:51.661673  726389 cri.go:89] found id: ""
	I1025 22:57:51.661700  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.661710  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:51.661716  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:51.661769  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:51.707516  726389 cri.go:89] found id: ""
	I1025 22:57:51.707551  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.707564  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:51.707572  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:51.707646  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:51.745242  726389 cri.go:89] found id: ""
	I1025 22:57:51.745277  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.745288  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:51.745295  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:51.745360  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:51.778136  726389 cri.go:89] found id: ""
	I1025 22:57:51.778165  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.778180  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:51.778193  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:51.778210  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:51.826323  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:51.826365  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:51.839635  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:51.839673  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:51.905218  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:51.905242  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:51.905260  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:51.979641  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:51.979680  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.519362  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:54.532482  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:54.532560  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:54.566193  726389 cri.go:89] found id: ""
	I1025 22:57:54.566221  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.566232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:54.566240  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:54.566304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:54.602139  726389 cri.go:89] found id: ""
	I1025 22:57:54.602166  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.602178  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:54.602187  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:54.602245  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:54.636484  726389 cri.go:89] found id: ""
	I1025 22:57:54.636519  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.636529  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:54.636545  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:54.636610  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:54.670617  726389 cri.go:89] found id: ""
	I1025 22:57:54.670649  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.670660  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:54.670666  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:54.670726  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:54.702360  726389 cri.go:89] found id: ""
	I1025 22:57:54.702400  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.702412  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:54.702420  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:54.702491  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:54.736101  726389 cri.go:89] found id: ""
	I1025 22:57:54.736140  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.736153  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:54.736161  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:54.736225  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:54.768706  726389 cri.go:89] found id: ""
	I1025 22:57:54.768744  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.768757  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:54.768766  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:54.768828  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:54.800919  726389 cri.go:89] found id: ""
	I1025 22:57:54.800965  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.800978  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:54.800989  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:54.801008  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:54.866242  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:54.866277  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:54.866294  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:54.942084  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:54.942127  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.979383  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:54.979422  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:55.029227  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:55.029269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.543312  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:57.557090  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:57.557176  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:57.594813  726389 cri.go:89] found id: ""
	I1025 22:57:57.594847  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.594860  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:57.594868  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:57.594933  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:57.629736  726389 cri.go:89] found id: ""
	I1025 22:57:57.629769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.629781  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:57.629790  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:57.629855  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:57.663895  726389 cri.go:89] found id: ""
	I1025 22:57:57.663927  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.663935  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:57.663940  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:57.663991  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:57.696122  726389 cri.go:89] found id: ""
	I1025 22:57:57.696153  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.696164  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:57.696171  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:57.696238  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:57.733740  726389 cri.go:89] found id: ""
	I1025 22:57:57.733769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.733778  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:57.733785  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:57.733839  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:57.766855  726389 cri.go:89] found id: ""
	I1025 22:57:57.766886  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.766897  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:57.766905  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:57.766971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:57.804080  726389 cri.go:89] found id: ""
	I1025 22:57:57.804110  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.804118  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:57.804125  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:57.804178  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:57.837482  726389 cri.go:89] found id: ""
	I1025 22:57:57.837511  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.837520  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:57.837530  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:57.837542  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:57.889217  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:57.889265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.902999  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:57.903039  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:57.968303  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:57.968327  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:57.968345  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:58.046929  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:58.046981  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:00.589410  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:00.602271  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:00.602344  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:00.635947  726389 cri.go:89] found id: ""
	I1025 22:58:00.635980  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.635989  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:00.635995  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:00.636057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:00.668039  726389 cri.go:89] found id: ""
	I1025 22:58:00.668072  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.668083  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:00.668092  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:00.668163  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:00.700889  726389 cri.go:89] found id: ""
	I1025 22:58:00.700916  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.700925  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:00.700931  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:00.701026  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:00.734409  726389 cri.go:89] found id: ""
	I1025 22:58:00.734440  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.734452  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:00.734459  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:00.734527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:00.770435  726389 cri.go:89] found id: ""
	I1025 22:58:00.770462  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.770469  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:00.770476  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:00.770535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:00.803431  726389 cri.go:89] found id: ""
	I1025 22:58:00.803466  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.803477  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:00.803486  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:00.803552  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:00.837896  726389 cri.go:89] found id: ""
	I1025 22:58:00.837932  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.837943  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:00.837951  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:00.838025  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:00.875375  726389 cri.go:89] found id: ""
	I1025 22:58:00.875414  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.875425  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:00.875437  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:00.875453  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:00.925019  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:00.925057  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:00.938018  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:00.938050  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:01.008170  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:01.008199  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:01.008216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:01.082487  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:01.082530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:03.623673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:03.637286  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:03.637371  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:03.673836  726389 cri.go:89] found id: ""
	I1025 22:58:03.673884  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.673897  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:03.673906  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:03.673971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:03.706700  726389 cri.go:89] found id: ""
	I1025 22:58:03.706731  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.706742  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:03.706750  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:03.706818  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:03.738775  726389 cri.go:89] found id: ""
	I1025 22:58:03.738804  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.738815  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:03.738823  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:03.738889  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:03.770246  726389 cri.go:89] found id: ""
	I1025 22:58:03.770274  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.770284  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:03.770292  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:03.770366  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:03.811193  726389 cri.go:89] found id: ""
	I1025 22:58:03.811222  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.811231  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:03.811237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:03.811290  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:03.842644  726389 cri.go:89] found id: ""
	I1025 22:58:03.842678  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.842686  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:03.842693  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:03.842750  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:03.874753  726389 cri.go:89] found id: ""
	I1025 22:58:03.874780  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.874788  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:03.874794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:03.874845  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:03.907133  726389 cri.go:89] found id: ""
	I1025 22:58:03.907162  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.907173  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:03.907186  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:03.907202  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:03.957250  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:03.957287  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:03.970381  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:03.970408  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:04.033620  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:04.033647  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:04.033663  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:04.108254  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:04.108296  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:06.647214  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:06.660871  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:06.660942  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:06.694191  726389 cri.go:89] found id: ""
	I1025 22:58:06.694223  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.694232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:06.694243  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:06.694295  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:06.728177  726389 cri.go:89] found id: ""
	I1025 22:58:06.728209  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.728222  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:06.728229  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:06.728300  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:06.761968  726389 cri.go:89] found id: ""
	I1025 22:58:06.762003  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.762015  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:06.762022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:06.762089  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:06.794139  726389 cri.go:89] found id: ""
	I1025 22:58:06.794172  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.794186  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:06.794195  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:06.794261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:06.830436  726389 cri.go:89] found id: ""
	I1025 22:58:06.830468  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.830481  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:06.830490  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:06.830557  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:06.865350  726389 cri.go:89] found id: ""
	I1025 22:58:06.865391  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.865405  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:06.865412  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:06.865468  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:06.899259  726389 cri.go:89] found id: ""
	I1025 22:58:06.899288  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.899298  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:06.899304  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:06.899354  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:06.930753  726389 cri.go:89] found id: ""
	I1025 22:58:06.930784  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.930793  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:06.930802  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:06.930813  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:06.943437  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:06.943464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:07.012837  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:07.012862  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:07.012875  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:07.085555  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:07.085606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:07.125421  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:07.125464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:09.678235  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:09.691802  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:09.691884  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:09.730774  726389 cri.go:89] found id: ""
	I1025 22:58:09.730813  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.730826  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:09.730838  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:09.730893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:09.768841  726389 cri.go:89] found id: ""
	I1025 22:58:09.768878  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.768894  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:09.768903  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:09.768984  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:09.802970  726389 cri.go:89] found id: ""
	I1025 22:58:09.803001  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.803013  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:09.803022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:09.803093  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:09.835041  726389 cri.go:89] found id: ""
	I1025 22:58:09.835075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.835087  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:09.835095  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:09.835148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:09.868561  726389 cri.go:89] found id: ""
	I1025 22:58:09.868590  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.868601  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:09.868609  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:09.868689  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:09.901694  726389 cri.go:89] found id: ""
	I1025 22:58:09.901721  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.901730  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:09.901737  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:09.901793  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:09.936138  726389 cri.go:89] found id: ""
	I1025 22:58:09.936167  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.936178  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:09.936187  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:09.936250  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:09.969041  726389 cri.go:89] found id: ""
	I1025 22:58:09.969075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.969087  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:09.969100  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:09.969115  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:10.036786  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:10.036816  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:10.036832  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:10.108946  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:10.109015  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:10.150241  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:10.150278  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:10.201815  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:10.201862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:12.715673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:12.729286  726389 kubeadm.go:597] duration metric: took 4m4.085037105s to restartPrimaryControlPlane
	W1025 22:58:12.729380  726389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 22:58:12.729407  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:58:13.183339  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:58:13.197871  726389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:58:13.207895  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:58:13.217907  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:58:13.217929  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 22:58:13.217990  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:58:13.227351  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:58:13.227422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:58:13.237158  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:58:13.246361  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:58:13.246431  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:58:13.256260  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.265821  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:58:13.265885  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.275535  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:58:13.284737  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:58:13.284804  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:58:13.294340  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:58:13.357520  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:58:13.357618  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:58:13.492934  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:58:13.493109  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:58:13.493237  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:58:13.676988  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:58:13.679089  726389 out.go:235]   - Generating certificates and keys ...
	I1025 22:58:13.679191  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:58:13.679294  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:58:13.679410  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:58:13.679499  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:58:13.679591  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:58:13.679673  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:58:13.679773  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:58:13.679860  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:58:13.679958  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:58:13.680063  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:58:13.680117  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:58:13.680195  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:58:13.792687  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:58:13.867665  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:58:14.014215  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:58:14.157457  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:58:14.181574  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:58:14.181693  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:58:14.181766  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:58:14.322320  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:58:14.324285  726389 out.go:235]   - Booting up control plane ...
	I1025 22:58:14.324402  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:58:14.328027  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:58:14.331034  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:58:14.332233  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:58:14.340260  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:58:54.338405  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:58:54.338592  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:54.338841  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:58:59.339365  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:59.339661  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:09.340395  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:09.340593  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:29.341629  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:29.341864  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.342793  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:09.343142  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.343171  726389 kubeadm.go:310] 
	I1025 23:00:09.343244  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:00:09.343309  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:00:09.343320  726389 kubeadm.go:310] 
	I1025 23:00:09.343358  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:00:09.343390  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:00:09.343481  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:00:09.343489  726389 kubeadm.go:310] 
	I1025 23:00:09.343609  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:00:09.343655  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:00:09.343701  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:00:09.343711  726389 kubeadm.go:310] 
	I1025 23:00:09.343811  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:00:09.343886  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:00:09.343898  726389 kubeadm.go:310] 
	I1025 23:00:09.344020  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:00:09.344148  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:00:09.344258  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:00:09.344355  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:00:09.344365  726389 kubeadm.go:310] 
	I1025 23:00:09.345056  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:00:09.345170  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:00:09.345261  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1025 23:00:09.345502  726389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 23:00:09.345550  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 23:00:09.805116  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 23:00:09.820225  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 23:00:09.829679  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 23:00:09.829702  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 23:00:09.829756  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 23:00:09.838792  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 23:00:09.838857  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 23:00:09.847823  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 23:00:09.856364  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 23:00:09.856422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 23:00:09.865400  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.873766  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 23:00:09.873827  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.882969  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 23:00:09.891527  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 23:00:09.891606  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 23:00:09.900940  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 23:00:09.969506  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 23:00:09.969568  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 23:00:10.115097  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 23:00:10.115224  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 23:00:10.115397  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 23:00:10.293601  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 23:00:10.296142  726389 out.go:235]   - Generating certificates and keys ...
	I1025 23:00:10.296255  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 23:00:10.296361  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 23:00:10.296502  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 23:00:10.296583  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 23:00:10.296676  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 23:00:10.296748  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 23:00:10.296840  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 23:00:10.296949  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 23:00:10.297071  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 23:00:10.297182  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 23:00:10.297236  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 23:00:10.297334  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 23:00:10.411124  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 23:00:10.530014  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 23:00:10.624647  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 23:00:10.777858  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 23:00:10.797014  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 23:00:10.798078  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 23:00:10.798168  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 23:00:10.940610  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 23:00:10.942427  726389 out.go:235]   - Booting up control plane ...
	I1025 23:00:10.942572  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 23:00:10.959667  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 23:00:10.959757  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 23:00:10.959910  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 23:00:10.963884  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 23:00:50.966097  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 23:00:50.966211  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:50.966448  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:55.966794  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:55.967051  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:05.967421  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:05.967674  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:25.968507  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:25.968765  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969405  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:02:05.969627  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969639  726389 kubeadm.go:310] 
	I1025 23:02:05.969676  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:02:05.969777  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:02:05.969821  726389 kubeadm.go:310] 
	I1025 23:02:05.969885  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:02:05.969935  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:02:05.970078  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:02:05.970092  726389 kubeadm.go:310] 
	I1025 23:02:05.970248  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:02:05.970290  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:02:05.970375  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:02:05.970388  726389 kubeadm.go:310] 
	I1025 23:02:05.970517  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:02:05.970595  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:02:05.970602  726389 kubeadm.go:310] 
	I1025 23:02:05.970729  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:02:05.970840  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:02:05.970914  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:02:05.971019  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:02:05.971031  726389 kubeadm.go:310] 
	I1025 23:02:05.971808  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:02:05.971923  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:02:05.972087  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 23:02:05.972124  726389 kubeadm.go:394] duration metric: took 7m57.377970738s to StartCluster
	I1025 23:02:05.972182  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 23:02:05.972244  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 23:02:06.012800  726389 cri.go:89] found id: ""
	I1025 23:02:06.012837  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.012852  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 23:02:06.012860  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 23:02:06.012925  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 23:02:06.051712  726389 cri.go:89] found id: ""
	I1025 23:02:06.051748  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.051761  726389 logs.go:284] No container was found matching "etcd"
	I1025 23:02:06.051769  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 23:02:06.051834  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 23:02:06.084904  726389 cri.go:89] found id: ""
	I1025 23:02:06.084939  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.084950  726389 logs.go:284] No container was found matching "coredns"
	I1025 23:02:06.084973  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 23:02:06.085056  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 23:02:06.120083  726389 cri.go:89] found id: ""
	I1025 23:02:06.120121  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.120133  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 23:02:06.120140  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 23:02:06.120197  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 23:02:06.154172  726389 cri.go:89] found id: ""
	I1025 23:02:06.154197  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.154205  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 23:02:06.154211  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 23:02:06.154261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 23:02:06.187085  726389 cri.go:89] found id: ""
	I1025 23:02:06.187130  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.187143  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 23:02:06.187152  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 23:02:06.187220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 23:02:06.220391  726389 cri.go:89] found id: ""
	I1025 23:02:06.220421  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.220430  726389 logs.go:284] No container was found matching "kindnet"
	I1025 23:02:06.220437  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 23:02:06.220503  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 23:02:06.254240  726389 cri.go:89] found id: ""
	I1025 23:02:06.254274  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.254286  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 23:02:06.254301  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 23:02:06.254340  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 23:02:06.301861  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 23:02:06.301907  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 23:02:06.315888  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 23:02:06.315919  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 23:02:06.386034  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 23:02:06.386073  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 23:02:06.386091  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 23:02:06.487167  726389 logs.go:123] Gathering logs for container status ...
	I1025 23:02:06.487216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 23:02:06.539615  726389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 23:02:06.539690  726389 out.go:270] * 
	W1025 23:02:06.539895  726389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.539922  726389 out.go:270] * 
	W1025 23:02:06.540790  726389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 23:02:06.545196  726389 out.go:201] 
	W1025 23:02:06.546506  726389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.546544  726389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 23:02:06.546564  726389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 23:02:06.548055  726389 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
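Note: minikube's own suggestion in the log above points at the kubelet cgroup driver. As a minimal sketch only (the base flags are copied verbatim from the failing invocation quoted above, and the added --extra-config flag is the one suggested by minikube in the log, not something verified against this run), a retry would look like:

	out/minikube-linux-amd64 start -p old-k8s-version-005932 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd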
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (230.57715ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-005932 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-601894 image list                          | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-657458 image list                           | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| addons  | enable metrics-server -p newest-cni-357495             | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-357495                  | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-166447                           | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| image   | newest-cni-357495 image list                           | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 22:57:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:57:09.006096  728361 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:57:09.006201  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006209  728361 out.go:358] Setting ErrFile to fd 2...
	I1025 22:57:09.006214  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006451  728361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:57:09.006988  728361 out.go:352] Setting JSON to false
	I1025 22:57:09.007986  728361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20373,"bootTime":1729876656,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:57:09.008093  728361 start.go:139] virtualization: kvm guest
	I1025 22:57:09.010465  728361 out.go:177] * [newest-cni-357495] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:57:09.011802  728361 notify.go:220] Checking for updates...
	I1025 22:57:09.011839  728361 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:57:09.013146  728361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:57:09.014475  728361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:09.015727  728361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:57:09.016972  728361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:57:09.018210  728361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:57:09.019736  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:09.020150  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.020224  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.035482  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1025 22:57:09.035920  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.036595  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.036617  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.037009  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.037247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.037593  728361 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:57:09.037912  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.037954  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.053072  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I1025 22:57:09.053595  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.054218  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.054244  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.054588  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.054779  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.090073  728361 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:57:09.091244  728361 start.go:297] selected driver: kvm2
	I1025 22:57:09.091260  728361 start.go:901] validating driver "kvm2" against &{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.091400  728361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:57:09.092078  728361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.092162  728361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:57:09.107070  728361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:57:09.107505  728361 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:09.107537  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:09.107588  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:09.107626  728361 start.go:340] cluster config:
	{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.107743  728361 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.109586  728361 out.go:177] * Starting "newest-cni-357495" primary control-plane node in "newest-cni-357495" cluster
	I1025 22:57:09.110853  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:09.110886  728361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 22:57:09.110896  728361 cache.go:56] Caching tarball of preloaded images
	I1025 22:57:09.111001  728361 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:57:09.111015  728361 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 22:57:09.111159  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:09.111340  728361 start.go:360] acquireMachinesLock for newest-cni-357495: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:57:09.111385  728361 start.go:364] duration metric: took 26.544µs to acquireMachinesLock for "newest-cni-357495"
	I1025 22:57:09.111405  728361 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:57:09.111420  728361 fix.go:54] fixHost starting: 
	I1025 22:57:09.111679  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.111715  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.126695  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1025 22:57:09.127148  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.127662  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.127683  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.128015  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.128203  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.128345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:09.129983  728361 fix.go:112] recreateIfNeeded on newest-cni-357495: state=Stopped err=<nil>
	I1025 22:57:09.130022  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	W1025 22:57:09.130181  728361 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 22:57:09.131768  728361 out.go:177] * Restarting existing kvm2 VM for "newest-cni-357495" ...
	I1025 22:57:04.664834  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:04.677759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:04.677820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:04.710557  726389 cri.go:89] found id: ""
	I1025 22:57:04.710585  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.710594  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:04.710601  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:04.710655  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:04.747197  726389 cri.go:89] found id: ""
	I1025 22:57:04.747225  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.747234  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:04.747240  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:04.747288  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:04.787986  726389 cri.go:89] found id: ""
	I1025 22:57:04.788018  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.788027  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:04.788034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:04.788091  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:04.819796  726389 cri.go:89] found id: ""
	I1025 22:57:04.819824  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.819833  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:04.819839  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:04.819887  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:04.856885  726389 cri.go:89] found id: ""
	I1025 22:57:04.856925  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.856938  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:04.856946  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:04.857021  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:04.901723  726389 cri.go:89] found id: ""
	I1025 22:57:04.901759  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.901770  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:04.901779  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:04.901846  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:04.943775  726389 cri.go:89] found id: ""
	I1025 22:57:04.943810  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.943821  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:04.943830  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:04.943893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:04.985957  726389 cri.go:89] found id: ""
	I1025 22:57:04.985982  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.985991  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:04.986000  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:04.986012  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:05.061490  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:05.061529  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:05.103028  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:05.103059  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:05.152607  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:05.152644  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:05.167577  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:05.167624  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:05.246428  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:07.747514  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:07.764567  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:07.764653  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:07.804356  726389 cri.go:89] found id: ""
	I1025 22:57:07.804453  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.804479  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:07.804498  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:07.804594  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:07.852155  726389 cri.go:89] found id: ""
	I1025 22:57:07.852190  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.852201  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:07.852210  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:07.852287  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:07.906149  726389 cri.go:89] found id: ""
	I1025 22:57:07.906195  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.906209  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:07.906237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:07.906321  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:07.946134  726389 cri.go:89] found id: ""
	I1025 22:57:07.946165  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.946177  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:07.946189  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:07.946257  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:07.994191  726389 cri.go:89] found id: ""
	I1025 22:57:07.994225  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.994243  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:07.994252  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:07.994324  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:08.039254  726389 cri.go:89] found id: ""
	I1025 22:57:08.039284  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.039296  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:08.039303  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:08.039370  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:08.083985  726389 cri.go:89] found id: ""
	I1025 22:57:08.084016  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.084027  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:08.084034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:08.084100  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:08.121051  726389 cri.go:89] found id: ""
	I1025 22:57:08.121084  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.121096  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:08.121111  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:08.121128  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:08.210698  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:08.210743  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:08.251297  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:08.251326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:08.309007  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:08.309049  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:08.323243  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:08.323281  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:08.395704  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:06.985771  725359 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001894992s
	I1025 22:57:06.985860  725359 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1025 22:57:11.989818  725359 kubeadm.go:310] [api-check] The API server is healthy after 5.002310213s
	I1025 22:57:12.000090  725359 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 22:57:12.029347  725359 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 22:57:12.065009  725359 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 22:57:12.065298  725359 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-166447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 22:57:12.080390  725359 kubeadm.go:310] [bootstrap-token] Using token: gn84c5.mnibhpx86csafbn4
	I1025 22:57:12.081888  725359 out.go:235]   - Configuring RBAC rules ...
	I1025 22:57:12.082040  725359 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 22:57:12.094696  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 22:57:12.107652  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 22:57:12.112673  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 22:57:12.118594  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 22:57:12.131842  725359 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 22:57:12.397191  725359 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 22:57:12.821901  725359 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 22:57:13.393906  725359 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 22:57:13.394919  725359 kubeadm.go:310] 
	I1025 22:57:13.395007  725359 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 22:57:13.395019  725359 kubeadm.go:310] 
	I1025 22:57:13.395120  725359 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 22:57:13.395130  725359 kubeadm.go:310] 
	I1025 22:57:13.395163  725359 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 22:57:13.395252  725359 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 22:57:13.395324  725359 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 22:57:13.395333  725359 kubeadm.go:310] 
	I1025 22:57:13.395388  725359 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 22:57:13.395398  725359 kubeadm.go:310] 
	I1025 22:57:13.395460  725359 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 22:57:13.395470  725359 kubeadm.go:310] 
	I1025 22:57:13.395533  725359 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 22:57:13.395623  725359 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 22:57:13.395711  725359 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 22:57:13.395735  725359 kubeadm.go:310] 
	I1025 22:57:13.395856  725359 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 22:57:13.395977  725359 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 22:57:13.395991  725359 kubeadm.go:310] 
	I1025 22:57:13.396103  725359 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396257  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a \
	I1025 22:57:13.396290  725359 kubeadm.go:310] 	--control-plane 
	I1025 22:57:13.396299  725359 kubeadm.go:310] 
	I1025 22:57:13.396418  725359 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 22:57:13.396428  725359 kubeadm.go:310] 
	I1025 22:57:13.396539  725359 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396691  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a 
	I1025 22:57:13.397292  725359 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:57:13.397395  725359 cni.go:84] Creating CNI manager for ""
	I1025 22:57:13.397415  725359 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:13.399132  725359 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:09.132799  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Start
	I1025 22:57:09.133007  728361 main.go:141] libmachine: (newest-cni-357495) starting domain...
	I1025 22:57:09.133028  728361 main.go:141] libmachine: (newest-cni-357495) ensuring networks are active...
	I1025 22:57:09.133784  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network default is active
	I1025 22:57:09.134127  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network mk-newest-cni-357495 is active
	I1025 22:57:09.134535  728361 main.go:141] libmachine: (newest-cni-357495) getting domain XML...
	I1025 22:57:09.135259  728361 main.go:141] libmachine: (newest-cni-357495) creating domain...
	I1025 22:57:10.376675  728361 main.go:141] libmachine: (newest-cni-357495) waiting for IP...
	I1025 22:57:10.377919  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.378434  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.378529  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.378420  728395 retry.go:31] will retry after 234.774904ms: waiting for domain to come up
	I1025 22:57:10.615044  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.615713  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.615744  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.615692  728395 retry.go:31] will retry after 344.301388ms: waiting for domain to come up
	I1025 22:57:10.961349  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.961953  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.961987  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.961901  728395 retry.go:31] will retry after 439.472335ms: waiting for domain to come up
	I1025 22:57:11.403081  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:11.403801  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:11.403833  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:11.403754  728395 retry.go:31] will retry after 603.917881ms: waiting for domain to come up
	I1025 22:57:12.009100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.009791  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.009816  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.009766  728395 retry.go:31] will retry after 654.012412ms: waiting for domain to come up
	I1025 22:57:12.665694  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.666298  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.666331  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.666254  728395 retry.go:31] will retry after 598.223644ms: waiting for domain to come up
	I1025 22:57:13.266161  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:13.266714  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:13.266746  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:13.266670  728395 retry.go:31] will retry after 807.374369ms: waiting for domain to come up
	I1025 22:57:10.896885  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:10.912430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:10.912544  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:10.949298  726389 cri.go:89] found id: ""
	I1025 22:57:10.949332  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.949345  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:10.949356  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:10.949420  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:10.992906  726389 cri.go:89] found id: ""
	I1025 22:57:10.992941  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.992963  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:10.992972  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:10.993037  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:11.035283  726389 cri.go:89] found id: ""
	I1025 22:57:11.035312  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.035321  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:11.035329  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:11.035391  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:11.075912  726389 cri.go:89] found id: ""
	I1025 22:57:11.075945  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.075957  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:11.075966  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:11.076031  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:11.116675  726389 cri.go:89] found id: ""
	I1025 22:57:11.116709  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.116721  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:11.116727  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:11.116788  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:11.153210  726389 cri.go:89] found id: ""
	I1025 22:57:11.153244  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.153258  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:11.153267  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:11.153331  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:11.195233  726389 cri.go:89] found id: ""
	I1025 22:57:11.195266  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.195278  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:11.195285  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:11.195346  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:11.237164  726389 cri.go:89] found id: ""
	I1025 22:57:11.237195  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.237206  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:11.237219  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:11.237236  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:11.299994  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:11.300043  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:11.316006  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:11.316055  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:11.404343  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:11.404368  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:11.404384  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:11.496349  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:11.496391  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:14.050229  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:14.064529  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:14.064615  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:14.101831  726389 cri.go:89] found id: ""
	I1025 22:57:14.101863  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.101877  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:14.101886  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:14.101950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:14.139876  726389 cri.go:89] found id: ""
	I1025 22:57:14.139906  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.139915  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:14.139921  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:14.139982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:14.175405  726389 cri.go:89] found id: ""
	I1025 22:57:14.175442  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.175454  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:14.175463  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:14.175535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:14.220337  726389 cri.go:89] found id: ""
	I1025 22:57:14.220372  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.220392  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:14.220400  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:14.220471  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:14.262358  726389 cri.go:89] found id: ""
	I1025 22:57:14.262384  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.262393  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:14.262399  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:14.262457  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:14.303586  726389 cri.go:89] found id: ""
	I1025 22:57:14.303621  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.303629  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:14.303636  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:14.303687  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:14.343365  726389 cri.go:89] found id: ""
	I1025 22:57:14.343399  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.343411  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:14.343421  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:14.343494  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:14.376842  726389 cri.go:89] found id: ""
	I1025 22:57:14.376879  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.376892  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:14.376905  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:14.376921  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:14.426780  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:14.426819  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:14.439976  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:14.440007  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:14.512226  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:14.512258  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:14.512276  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:14.588240  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:14.588284  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:13.400319  725359 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:13.410568  725359 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:13.431208  725359 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:13.431301  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:13.431322  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-166447 minikube.k8s.io/updated_at=2024_10_25T22_57_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=default-k8s-diff-port-166447 minikube.k8s.io/primary=true
	I1025 22:57:13.639716  725359 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:13.639860  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.140884  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.639916  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.140843  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.640888  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.140691  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.640258  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.140873  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.640232  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.748262  725359 kubeadm.go:1113] duration metric: took 4.317031918s to wait for elevateKubeSystemPrivileges
	I1025 22:57:17.748310  725359 kubeadm.go:394] duration metric: took 5m32.487100054s to StartCluster
	I1025 22:57:17.748334  725359 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.748440  725359 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:17.749728  725359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.750023  725359 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:17.750214  725359 config.go:182] Loaded profile config "default-k8s-diff-port-166447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:17.750280  725359 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:17.750383  725359 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750403  725359 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750412  725359 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:17.750443  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750455  725359 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750479  725359 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-166447"
	I1025 22:57:17.750472  725359 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750509  725359 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750518  725359 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:17.750548  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750880  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750914  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.750968  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750996  725359 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.751003  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751012  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751019  725359 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.751028  725359 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:17.751043  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751061  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.751477  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751531  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.752307  725359 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:17.754336  725359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:17.771639  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I1025 22:57:17.771674  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I1025 22:57:17.771640  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I1025 22:57:17.772091  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772144  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772781  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.772806  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773002  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.773021  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773179  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.773255  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.773747  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.773792  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.774065  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.774143  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.774156  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.774286  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.774620  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.775315  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.775393  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.777721  725359 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.777747  725359 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:17.777782  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.778158  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.778209  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.779137  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1025 22:57:17.779690  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.780249  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.780270  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.780756  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.781301  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.781337  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.795859  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I1025 22:57:17.796354  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I1025 22:57:17.796527  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.796726  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.797032  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797053  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797488  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.797567  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797584  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797677  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.798041  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.798308  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.799791  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I1025 22:57:17.799971  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.800466  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.800716  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.801196  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.801221  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.801700  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.802363  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.802448  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.802478  725359 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:17.802546  725359 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:17.804194  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1025 22:57:17.804511  725359 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:17.804535  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:17.804557  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804629  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:17.804640  725359 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:17.804657  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804697  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.805172  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.805189  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.805541  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.805768  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.809358  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.809694  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.810510  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.810544  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810708  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.810784  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810929  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.811051  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.811140  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.811287  725359 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:17.811466  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.811495  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.811518  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.811635  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.814016  725359 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:14.076273  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:14.076902  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:14.076934  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:14.076868  728395 retry.go:31] will retry after 1.185306059s: waiting for domain to come up
	I1025 22:57:15.263741  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:15.264326  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:15.264366  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:15.264273  728395 retry.go:31] will retry after 1.322346565s: waiting for domain to come up
	I1025 22:57:16.588814  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:16.589321  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:16.589347  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:16.589282  728395 retry.go:31] will retry after 1.73855821s: waiting for domain to come up
	I1025 22:57:18.330419  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:18.331024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:18.331054  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:18.330973  728395 retry.go:31] will retry after 2.069940103s: waiting for domain to come up
	I1025 22:57:17.132197  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:17.146596  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:17.146674  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:17.185560  726389 cri.go:89] found id: ""
	I1025 22:57:17.185593  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.185603  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:17.185610  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:17.185670  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:17.220864  726389 cri.go:89] found id: ""
	I1025 22:57:17.220897  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.220910  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:17.220919  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:17.221004  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:17.260844  726389 cri.go:89] found id: ""
	I1025 22:57:17.260872  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.260880  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:17.260887  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:17.260939  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:17.302800  726389 cri.go:89] found id: ""
	I1025 22:57:17.302833  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.302845  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:17.302853  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:17.302913  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:17.337851  726389 cri.go:89] found id: ""
	I1025 22:57:17.337881  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.337892  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:17.337901  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:17.337959  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:17.374697  726389 cri.go:89] found id: ""
	I1025 22:57:17.374739  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.374752  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:17.374760  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:17.374827  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:17.419883  726389 cri.go:89] found id: ""
	I1025 22:57:17.419913  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.419923  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:17.419929  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:17.419981  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:17.453770  726389 cri.go:89] found id: ""
	I1025 22:57:17.453797  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.453809  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:17.453821  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:17.453835  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:17.467935  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:17.467971  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:17.546221  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:17.546251  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:17.546269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:17.655338  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:17.655395  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:17.696499  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:17.696531  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:17.815285  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:17.815304  725359 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:17.815325  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.821095  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821105  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.821115  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821128  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.821146  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821336  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.821429  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.821740  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821905  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.823391  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I1025 22:57:17.823756  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.824397  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.824420  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.824819  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.825001  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.826499  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.826709  725359 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:17.826724  725359 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:17.826741  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.829834  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830223  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.830256  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830391  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.830555  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.830712  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.830834  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:18.014991  725359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:18.036760  725359 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078787  725359 node_ready.go:49] node "default-k8s-diff-port-166447" has status "Ready":"True"
	I1025 22:57:18.078820  725359 node_ready.go:38] duration metric: took 42.016031ms for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078834  725359 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:18.085830  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:18.122468  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:18.122502  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:18.151830  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:18.164388  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:18.181181  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:18.181212  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:18.239075  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:18.239113  725359 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:18.269994  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:18.270026  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:18.332398  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:18.332427  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:18.431935  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:18.431970  725359 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:18.435490  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:18.435518  725359 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:18.514890  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:18.514925  725359 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:18.543084  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.543128  725359 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:18.577174  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.620888  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:18.620921  725359 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:18.697204  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:18.697242  725359 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:18.810445  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:18.810484  725359 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:18.885504  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:19.260717  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.108837823s)
	I1025 22:57:19.260766  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096340939s)
	I1025 22:57:19.260787  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260802  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.260807  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260863  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261282  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261318  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261344  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261350  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261372  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261385  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261441  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261466  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261484  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261526  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261902  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261916  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.262246  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.263229  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.263251  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.290328  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.290366  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.290838  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.290847  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.290864  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.132386  725359 pod_ready.go:103] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:20.242738  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.665512298s)
	I1025 22:57:20.242808  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.242828  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243142  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243200  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:20.243217  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243225  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.243238  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243508  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243530  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243542  725359 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-166447"
	I1025 22:57:20.984026  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.098465183s)
	I1025 22:57:20.984079  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984091  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984421  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984436  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.984444  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984451  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984739  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984761  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.986558  725359 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-166447 addons enable metrics-server
	
	I1025 22:57:20.987567  725359 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 22:57:20.988902  725359 addons.go:510] duration metric: took 3.23862229s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1025 22:57:21.593090  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.593118  725359 pod_ready.go:82] duration metric: took 3.507254474s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.593131  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597786  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.597816  725359 pod_ready.go:82] duration metric: took 4.674133ms for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597830  725359 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:20.402145  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:20.402661  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:20.402722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:20.402656  728395 retry.go:31] will retry after 3.412502046s: waiting for domain to come up
	I1025 22:57:23.818716  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:23.819208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:23.819237  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:23.819161  728395 retry.go:31] will retry after 4.418758048s: waiting for domain to come up
	I1025 22:57:20.249946  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:20.267883  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:20.267964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:20.317028  726389 cri.go:89] found id: ""
	I1025 22:57:20.317071  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.317083  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:20.317092  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:20.317159  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:20.362449  726389 cri.go:89] found id: ""
	I1025 22:57:20.362481  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.362491  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:20.362497  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:20.362548  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:20.398308  726389 cri.go:89] found id: ""
	I1025 22:57:20.398348  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.398369  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:20.398377  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:20.398450  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:20.446702  726389 cri.go:89] found id: ""
	I1025 22:57:20.446731  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.446740  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:20.446746  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:20.446798  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:20.489776  726389 cri.go:89] found id: ""
	I1025 22:57:20.489809  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.489826  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:20.489833  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:20.489899  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:20.535387  726389 cri.go:89] found id: ""
	I1025 22:57:20.535415  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.535426  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:20.535442  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:20.535507  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:20.573433  726389 cri.go:89] found id: ""
	I1025 22:57:20.573467  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.573479  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:20.573487  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:20.573554  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:20.613584  726389 cri.go:89] found id: ""
	I1025 22:57:20.613619  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.613631  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:20.613643  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:20.613664  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:20.675387  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:20.675426  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:20.691467  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:20.691513  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:20.813943  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:20.813975  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:20.813992  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:20.904974  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:20.905028  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.450429  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:23.464096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:23.464169  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:23.500126  726389 cri.go:89] found id: ""
	I1025 22:57:23.500152  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.500161  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:23.500167  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:23.500220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:23.534564  726389 cri.go:89] found id: ""
	I1025 22:57:23.534597  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.534608  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:23.534615  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:23.534666  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:23.577493  726389 cri.go:89] found id: ""
	I1025 22:57:23.577529  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.577541  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:23.577551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:23.577679  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:23.616432  726389 cri.go:89] found id: ""
	I1025 22:57:23.616463  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.616474  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:23.616488  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:23.616553  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:23.655679  726389 cri.go:89] found id: ""
	I1025 22:57:23.655715  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.655727  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:23.655735  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:23.655804  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:23.695528  726389 cri.go:89] found id: ""
	I1025 22:57:23.695558  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.695570  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:23.695578  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:23.695642  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:23.734570  726389 cri.go:89] found id: ""
	I1025 22:57:23.734610  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.734622  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:23.734631  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:23.734703  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:23.774178  726389 cri.go:89] found id: ""
	I1025 22:57:23.774213  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.774225  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:23.774238  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:23.774254  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:23.857347  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:23.857389  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.896130  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:23.896167  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:23.948276  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:23.948320  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:23.961809  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:23.961840  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:24.053746  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:23.604335  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.104577  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.613548  725359 pod_ready.go:93] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.613571  725359 pod_ready.go:82] duration metric: took 5.015733422s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.613582  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621883  725359 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.621908  725359 pod_ready.go:82] duration metric: took 8.319327ms for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621919  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630956  725359 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.630981  725359 pod_ready.go:82] duration metric: took 9.055173ms for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630994  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647393  725359 pod_ready.go:93] pod "kube-proxy-zqjjc" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.647428  725359 pod_ready.go:82] duration metric: took 16.426697ms for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647440  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658038  725359 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.658067  725359 pod_ready.go:82] duration metric: took 10.617453ms for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658077  725359 pod_ready.go:39] duration metric: took 8.57922838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:26.658096  725359 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:26.658162  725359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.705852  725359 api_server.go:72] duration metric: took 8.955782657s to wait for apiserver process to appear ...
	I1025 22:57:26.705882  725359 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:26.705909  725359 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8444/healthz ...
	I1025 22:57:26.712359  725359 api_server.go:279] https://192.168.61.249:8444/healthz returned 200:
	ok
	I1025 22:57:26.713354  725359 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:26.713378  725359 api_server.go:131] duration metric: took 7.487484ms to wait for apiserver health ...
	I1025 22:57:26.713397  725359 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:26.809108  725359 system_pods.go:59] 9 kube-system pods found
	I1025 22:57:26.809156  725359 system_pods.go:61] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:26.809165  725359 system_pods.go:61] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:26.809177  725359 system_pods.go:61] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:26.809184  725359 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:26.809191  725359 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:26.809196  725359 system_pods.go:61] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:26.809203  725359 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:26.809216  725359 system_pods.go:61] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:26.809226  725359 system_pods.go:61] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:26.809243  725359 system_pods.go:74] duration metric: took 95.838638ms to wait for pod list to return data ...
	I1025 22:57:26.809259  725359 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:27.003062  725359 default_sa.go:45] found service account: "default"
	I1025 22:57:27.003103  725359 default_sa.go:55] duration metric: took 193.830229ms for default service account to be created ...
	I1025 22:57:27.003120  725359 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 22:57:27.206396  725359 system_pods.go:86] 9 kube-system pods found
	I1025 22:57:27.206438  725359 system_pods.go:89] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:27.206446  725359 system_pods.go:89] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:27.206452  725359 system_pods.go:89] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:27.206457  725359 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:27.206463  725359 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:27.206468  725359 system_pods.go:89] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:27.206473  725359 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:27.206485  725359 system_pods.go:89] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:27.206491  725359 system_pods.go:89] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:27.206500  725359 system_pods.go:126] duration metric: took 203.373296ms to wait for k8s-apps to be running ...
	I1025 22:57:27.206511  725359 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:57:27.206568  725359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:57:27.236359  725359 system_svc.go:56] duration metric: took 29.835602ms WaitForService to wait for kubelet
	I1025 22:57:27.236401  725359 kubeadm.go:582] duration metric: took 9.486336184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:57:27.236428  725359 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:27.404633  725359 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:27.404660  725359 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:27.404674  725359 node_conditions.go:105] duration metric: took 168.23879ms to run NodePressure ...
	I1025 22:57:27.404686  725359 start.go:241] waiting for startup goroutines ...
	I1025 22:57:27.404693  725359 start.go:246] waiting for cluster config update ...
	I1025 22:57:27.404704  725359 start.go:255] writing updated cluster config ...
	I1025 22:57:27.404950  725359 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:27.471713  725359 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:27.473904  725359 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-166447" cluster and "default" namespace by default
	I1025 22:57:28.242024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242494  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has current primary IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242523  728361 main.go:141] libmachine: (newest-cni-357495) found domain IP: 192.168.72.113
	I1025 22:57:28.242535  728361 main.go:141] libmachine: (newest-cni-357495) reserving static IP address...
	I1025 22:57:28.242970  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.243000  728361 main.go:141] libmachine: (newest-cni-357495) DBG | skip adding static IP to network mk-newest-cni-357495 - found existing host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"}
	I1025 22:57:28.243013  728361 main.go:141] libmachine: (newest-cni-357495) reserved static IP address 192.168.72.113 for domain newest-cni-357495
	I1025 22:57:28.243028  728361 main.go:141] libmachine: (newest-cni-357495) waiting for SSH...
	I1025 22:57:28.243042  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Getting to WaitForSSH function...
	I1025 22:57:28.245300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245651  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.245680  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245811  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH client type: external
	I1025 22:57:28.245835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa (-rw-------)
	I1025 22:57:28.245865  728361 main.go:141] libmachine: (newest-cni-357495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:57:28.245876  728361 main.go:141] libmachine: (newest-cni-357495) DBG | About to run SSH command:
	I1025 22:57:28.245886  728361 main.go:141] libmachine: (newest-cni-357495) DBG | exit 0
	I1025 22:57:28.377143  728361 main.go:141] libmachine: (newest-cni-357495) DBG | SSH cmd err, output: <nil>: 
	I1025 22:57:28.377542  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetConfigRaw
	I1025 22:57:28.378182  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.380998  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381388  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.381422  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381661  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:28.382355  728361 machine.go:93] provisionDockerMachine start ...
	I1025 22:57:28.382383  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:28.382637  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.384883  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385241  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.385266  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385388  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.385550  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385705  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385873  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.386055  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.386295  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.386309  728361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 22:57:28.489731  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 22:57:28.489766  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490029  728361 buildroot.go:166] provisioning hostname "newest-cni-357495"
	I1025 22:57:28.490072  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490223  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.493372  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493804  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.493835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493978  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.494135  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494278  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494406  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.494585  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.494823  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.494850  728361 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-357495 && echo "newest-cni-357495" | sudo tee /etc/hostname
	I1025 22:57:28.612233  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-357495
	
	I1025 22:57:28.612271  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.615209  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615542  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.615568  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615802  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.616013  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616377  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.616605  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.616836  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.616860  728361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-357495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-357495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-357495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:57:28.731112  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:57:28.731149  728361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:57:28.731175  728361 buildroot.go:174] setting up certificates
	I1025 22:57:28.731189  728361 provision.go:84] configureAuth start
	I1025 22:57:28.731202  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.731508  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.734722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735105  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.735159  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735349  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.737700  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738025  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.738059  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738280  728361 provision.go:143] copyHostCerts
	I1025 22:57:28.738356  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:57:28.738370  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:57:28.738437  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:57:28.738544  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:57:28.738551  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:57:28.738576  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:57:28.738644  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:57:28.738652  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:57:28.738673  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:57:28.738739  728361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.newest-cni-357495 san=[127.0.0.1 192.168.72.113 localhost minikube newest-cni-357495]
	I1025 22:57:28.833704  728361 provision.go:177] copyRemoteCerts
	I1025 22:57:28.833762  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:57:28.833797  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.836780  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837177  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.837208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837372  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.837573  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.837734  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.837863  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:28.922411  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:57:28.948328  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:57:28.976524  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 22:57:29.005619  728361 provision.go:87] duration metric: took 274.411907ms to configureAuth
	I1025 22:57:29.005654  728361 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:57:29.005887  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:29.005985  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:26.553979  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.567886  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:26.567964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:26.603338  726389 cri.go:89] found id: ""
	I1025 22:57:26.603376  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.603389  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:26.603403  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:26.603475  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:26.637525  726389 cri.go:89] found id: ""
	I1025 22:57:26.637548  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.637556  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:26.637562  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:26.637609  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:26.672117  726389 cri.go:89] found id: ""
	I1025 22:57:26.672150  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.672159  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:26.672166  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:26.672230  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:26.705637  726389 cri.go:89] found id: ""
	I1025 22:57:26.705669  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.705681  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:26.705689  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:26.705762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:26.759040  726389 cri.go:89] found id: ""
	I1025 22:57:26.759070  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.759084  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:26.759092  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:26.759161  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:26.811512  726389 cri.go:89] found id: ""
	I1025 22:57:26.811537  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.811547  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:26.811555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:26.811641  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:26.851215  726389 cri.go:89] found id: ""
	I1025 22:57:26.851245  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.851256  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:26.851264  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:26.851330  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:26.884460  726389 cri.go:89] found id: ""
	I1025 22:57:26.884495  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.884508  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:26.884520  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:26.884535  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:26.960048  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:26.960092  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:26.998588  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:26.998620  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:27.061646  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:27.061692  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:27.078350  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:27.078385  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:27.150478  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
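For reference, the diagnostics gathered above (kubelet, CRI-O, dmesg, container status, describe nodes) can be reproduced by hand on the guest with the same commands the log runner uses; a minimal sketch, with paths taken from the log and the version-specific kubectl binary assumed to be present:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig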
	I1025 22:57:29.009371  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.009852  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.009887  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.010056  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.010269  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010451  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010622  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.010818  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.010989  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.011004  728361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:57:29.235601  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:57:29.235655  728361 machine.go:96] duration metric: took 853.280404ms to provisionDockerMachine
	I1025 22:57:29.235672  728361 start.go:293] postStartSetup for "newest-cni-357495" (driver="kvm2")
	I1025 22:57:29.235694  728361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:57:29.235722  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.236076  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:57:29.236116  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.239049  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239449  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.239482  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239668  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.239889  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.240099  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.240319  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.327450  728361 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:57:29.331888  728361 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:57:29.331921  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:57:29.331987  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:57:29.332065  728361 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:57:29.332195  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:57:29.341892  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:29.367038  728361 start.go:296] duration metric: took 131.349254ms for postStartSetup
	I1025 22:57:29.367084  728361 fix.go:56] duration metric: took 20.2556649s for fixHost
	I1025 22:57:29.367106  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.369924  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370255  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.370285  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370425  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.370590  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370745  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370950  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.371124  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.371304  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.371313  728361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:57:29.474861  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729897049.432427295
	
	I1025 22:57:29.474889  728361 fix.go:216] guest clock: 1729897049.432427295
	I1025 22:57:29.474899  728361 fix.go:229] Guest: 2024-10-25 22:57:29.432427295 +0000 UTC Remote: 2024-10-25 22:57:29.367088624 +0000 UTC m=+20.400142994 (delta=65.338671ms)
	I1025 22:57:29.474946  728361 fix.go:200] guest clock delta is within tolerance: 65.338671ms
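A minimal sketch of the same guest-clock comparison done manually; the SSH key path and target are taken from the log, and bc on the host is an assumption for the float arithmetic:

    guest=$(ssh -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa \
        docker@192.168.72.113 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"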
	I1025 22:57:29.474960  728361 start.go:83] releasing machines lock for "newest-cni-357495", held for 20.363562153s
	I1025 22:57:29.474986  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.475248  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:29.478056  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478406  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.478437  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478628  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479132  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479319  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479468  728361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:57:29.479506  728361 ssh_runner.go:195] Run: cat /version.json
	I1025 22:57:29.479527  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.479536  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.482531  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.482637  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483074  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483131  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483191  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483471  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483481  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483652  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483931  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.483955  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.484103  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.484143  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.582367  728361 ssh_runner.go:195] Run: systemctl --version
	I1025 22:57:29.590693  728361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:57:29.745303  728361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:57:29.754423  728361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:57:29.754501  728361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:57:29.775617  728361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:57:29.775648  728361 start.go:495] detecting cgroup driver to use...
	I1025 22:57:29.775747  728361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:57:29.799558  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:57:29.818705  728361 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:57:29.818806  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:57:29.833563  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:57:29.853630  728361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:57:29.983430  728361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:57:30.197267  728361 docker.go:233] disabling docker service ...
	I1025 22:57:30.197347  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:57:30.216012  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:57:30.230378  728361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:57:30.360555  728361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:57:30.484679  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
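The runtime-selection step above ensures only CRI-O will serve the CRI socket; a hedged shell equivalent of the containerd/cri-docker/docker shutdown it performs (same units as in the log):

    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service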
	I1025 22:57:30.503208  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:57:30.523720  728361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 22:57:30.523795  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.535314  728361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:57:30.535383  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.546715  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.557826  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.569760  728361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:57:30.582722  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.593853  728361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.611448  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.622915  728361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:57:30.633073  728361 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:57:30.633147  728361 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:57:30.647230  728361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:57:30.657299  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:30.768765  728361 ssh_runner.go:195] Run: sudo systemctl restart crio
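Taken together, the CRI-O preparation above points crictl at the CRI-O socket, sets the pause image and cgroup driver, enables bridge netfilter and IP forwarding, and restarts the service; a condensed sketch using the same files and values shown in the log:

    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter            # bridge-nf-call-iptables was missing before this, as logged above
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio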
	I1025 22:57:30.854500  728361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:57:30.854590  728361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:57:30.859405  728361 start.go:563] Will wait 60s for crictl version
	I1025 22:57:30.859473  728361 ssh_runner.go:195] Run: which crictl
	I1025 22:57:30.863420  728361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:57:30.908862  728361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:57:30.908976  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.939582  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.978153  728361 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1025 22:57:30.979430  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:30.982243  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982608  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:30.982641  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982834  728361 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1025 22:57:30.988035  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
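The hosts-file update above is idempotent: any existing host.minikube.internal entry is filtered out before the new one is appended. The same pattern as run in the log, written out as a sketch:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.72.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts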
	I1025 22:57:31.004301  728361 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 22:57:31.005441  728361 kubeadm.go:883] updating cluster {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:57:31.005579  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:31.005635  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:31.049853  728361 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1025 22:57:31.049928  728361 ssh_runner.go:195] Run: which lz4
	I1025 22:57:31.054174  728361 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:57:31.058473  728361 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:57:31.058505  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1025 22:57:32.497532  728361 crio.go:462] duration metric: took 1.44340372s to copy over tarball
	I1025 22:57:32.497637  728361 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
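The preload step above only copies the cached tarball when the image store is empty; a hedged on-guest sketch of the check-and-extract sequence (minikube copies the tarball in over SSH before this point):

    # on the guest, after the cached tarball has been copied to /preloaded.tar.lz4
    stat -c "%s %y" /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json    # should now list the preloaded k8s images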
	I1025 22:57:29.650805  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:29.664484  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:29.664563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:29.706919  726389 cri.go:89] found id: ""
	I1025 22:57:29.706950  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.706961  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:29.706968  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:29.707032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:29.748272  726389 cri.go:89] found id: ""
	I1025 22:57:29.748301  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.748313  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:29.748322  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:29.748383  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:29.783239  726389 cri.go:89] found id: ""
	I1025 22:57:29.783281  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.783303  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:29.783315  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:29.783381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:29.828942  726389 cri.go:89] found id: ""
	I1025 22:57:29.829005  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.829021  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:29.829031  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:29.829112  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:29.874831  726389 cri.go:89] found id: ""
	I1025 22:57:29.874864  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.874876  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:29.874885  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:29.874950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:29.920380  726389 cri.go:89] found id: ""
	I1025 22:57:29.920411  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.920422  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:29.920430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:29.920495  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:29.964594  726389 cri.go:89] found id: ""
	I1025 22:57:29.964624  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.964636  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:29.964643  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:29.964713  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:30.000416  726389 cri.go:89] found id: ""
	I1025 22:57:30.000449  726389 logs.go:282] 0 containers: []
	W1025 22:57:30.000461  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:30.000475  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:30.000500  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:30.073028  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:30.073055  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:30.073072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:30.158430  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:30.158481  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:30.212493  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:30.212530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:30.289552  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:30.289606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:32.808776  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:32.822039  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:32.822111  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:32.857007  726389 cri.go:89] found id: ""
	I1025 22:57:32.857042  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.857054  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:32.857063  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:32.857122  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:32.902015  726389 cri.go:89] found id: ""
	I1025 22:57:32.902045  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.902057  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:32.902066  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:32.902146  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:32.962252  726389 cri.go:89] found id: ""
	I1025 22:57:32.962287  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.962299  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:32.962307  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:32.962381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:33.010092  726389 cri.go:89] found id: ""
	I1025 22:57:33.010129  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.010140  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:33.010149  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:33.010219  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:33.057453  726389 cri.go:89] found id: ""
	I1025 22:57:33.057482  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.057492  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:33.057499  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:33.057618  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:33.096991  726389 cri.go:89] found id: ""
	I1025 22:57:33.097024  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.097035  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:33.097042  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:33.097092  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:33.130710  726389 cri.go:89] found id: ""
	I1025 22:57:33.130740  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.130751  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:33.130759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:33.130820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:33.169440  726389 cri.go:89] found id: ""
	I1025 22:57:33.169479  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.169491  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:33.169505  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:33.169520  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:33.249558  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:33.249586  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:33.249603  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:33.364568  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:33.364613  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:33.415233  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:33.415264  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:33.472943  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:33.473014  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:34.612317  728361 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11464276s)
	I1025 22:57:34.612352  728361 crio.go:469] duration metric: took 2.114771262s to extract the tarball
	I1025 22:57:34.612363  728361 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:57:34.651878  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:34.694439  728361 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 22:57:34.694463  728361 cache_images.go:84] Images are preloaded, skipping loading
	I1025 22:57:34.694472  728361 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.31.1 crio true true} ...
	I1025 22:57:34.694604  728361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-357495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:57:34.694677  728361 ssh_runner.go:195] Run: crio config
	I1025 22:57:34.748152  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:34.748178  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:34.748189  728361 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1025 22:57:34.748215  728361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-357495 NodeName:newest-cni-357495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:57:34.748372  728361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-357495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:57:34.748437  728361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1025 22:57:34.760143  728361 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:57:34.760202  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:57:34.771582  728361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1025 22:57:34.787944  728361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:57:34.804113  728361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1025 22:57:34.820688  728361 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I1025 22:57:34.824565  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:57:34.837134  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:34.952711  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
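The kubelet bring-up above creates the systemd directories, copies in the unit and kubeadm drop-in shown earlier, and starts the service; a minimal on-guest equivalent (the file contents are the ones rendered in this log):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # minikube then copies 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new into those directories
    sudo systemctl daemon-reload
    sudo systemctl start kubelet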
	I1025 22:57:34.970911  728361 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495 for IP: 192.168.72.113
	I1025 22:57:34.970937  728361 certs.go:194] generating shared ca certs ...
	I1025 22:57:34.970959  728361 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:34.971160  728361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:57:34.971239  728361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:57:34.971254  728361 certs.go:256] generating profile certs ...
	I1025 22:57:34.971378  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/client.key
	I1025 22:57:34.971475  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key.03300bc5
	I1025 22:57:34.971536  728361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key
	I1025 22:57:34.971687  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:57:34.971735  728361 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:57:34.971748  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:57:34.971781  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:57:34.971814  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:57:34.971845  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:57:34.971898  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:34.972920  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:57:35.035802  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:57:35.066849  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:57:35.095746  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:57:35.122667  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 22:57:35.152086  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:57:35.178215  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:57:35.201152  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 22:57:35.225276  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:57:35.247950  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:57:35.273680  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:57:35.297790  728361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:57:35.314273  728361 ssh_runner.go:195] Run: openssl version
	I1025 22:57:35.319977  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:57:35.332531  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337386  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337435  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.343239  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:57:35.354526  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:57:35.364927  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369254  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369307  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.375175  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:57:35.386699  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:57:35.397181  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401747  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401797  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.407254  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:57:35.417716  728361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:57:35.422134  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:57:35.428825  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:57:35.435416  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:57:35.441327  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:57:35.446978  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:57:35.452887  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
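The certificate checks above install the CA bundle under its OpenSSL subject-hash name and verify that the control-plane certs are still valid for at least a day; a condensed sketch of the same checks (cert names taken from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, the symlink name used above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    # fail if a cert expires within the next 24h (86400 seconds)
    for c in apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/peer front-proxy-client; do
        openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 || echo "$c expires within 24h"
    done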
	I1025 22:57:35.458800  728361 kubeadm.go:392] StartCluster: {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:35.458907  728361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:57:35.458975  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.508107  728361 cri.go:89] found id: ""
	I1025 22:57:35.508190  728361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:57:35.518730  728361 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 22:57:35.518756  728361 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 22:57:35.518812  728361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:57:35.528709  728361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:57:35.529470  728361 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-357495" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:35.529808  728361 kubeconfig.go:62] /home/jenkins/minikube-integration/19758-661979/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-357495" cluster setting kubeconfig missing "newest-cni-357495" context setting]
	I1025 22:57:35.530280  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:35.531821  728361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:57:35.541383  728361 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I1025 22:57:35.541408  728361 kubeadm.go:1160] stopping kube-system containers ...
	I1025 22:57:35.541426  728361 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 22:57:35.541475  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.581588  728361 cri.go:89] found id: ""
	I1025 22:57:35.581670  728361 ssh_runner.go:195] Run: sudo systemctl stop kubelet
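Before reconfiguring, the restart path stops any kube-system containers it finds and then the kubelet; none were found in this run, so only the kubelet stop has an effect. A hedged equivalent of that step:

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop $ids    # no-op here, since no containers were listed
    sudo systemctl stop kubelet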
	I1025 22:57:35.597329  728361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:57:35.606992  728361 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:57:35.607032  728361 kubeadm.go:157] found existing configuration files:
	
	I1025 22:57:35.607078  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:57:35.616052  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:57:35.616100  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:57:35.625202  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:57:35.634016  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:57:35.634060  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:57:35.643656  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.654009  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:57:35.654059  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.664119  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:57:35.673468  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:57:35.673524  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:57:35.683499  728361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
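The cleanup above removes any kubeconfig fragment that does not point at https://control-plane.minikube.internal:8443 and then promotes the freshly rendered kubeadm.yaml; a compact sketch of the same loop:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf \
            || sudo rm -f /etc/kubernetes/$f.conf
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml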
	I1025 22:57:35.693207  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:35.800242  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.661671  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.883048  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.950556  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
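	(The restart path above does not re-run a full `kubeadm init`; it replays only the phases needed to bring an existing node back up - certs, kubeconfig, kubelet-start, control-plane, etcd - each against the pre-rendered /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that pattern follows; it is an illustration only, not minikube's implementation, and the runSSH helper, which simply shells out locally here, is an assumption.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runSSH stands in for however commands reach the node; for this sketch
	// it just runs them locally through bash and prints the combined output.
	func runSSH(cmd string) error {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func main() {
		// Binary dir and config path are taken from the log lines above.
		const binDir = "/var/lib/minikube/binaries/v1.31.1"
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binDir, p)
			if err := runSSH(cmd); err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}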
	I1025 22:57:37.060335  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:37.060456  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:37.560722  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.061291  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.560646  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:35.989111  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:36.002822  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:36.002901  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:36.042325  726389 cri.go:89] found id: ""
	I1025 22:57:36.042362  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.042373  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:36.042381  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:36.042446  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:36.083924  726389 cri.go:89] found id: ""
	I1025 22:57:36.083957  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.083968  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:36.083976  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:36.084047  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:36.117475  726389 cri.go:89] found id: ""
	I1025 22:57:36.117511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.117523  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:36.117531  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:36.117592  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:36.151851  726389 cri.go:89] found id: ""
	I1025 22:57:36.151888  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.151901  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:36.151909  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:36.151975  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:36.188798  726389 cri.go:89] found id: ""
	I1025 22:57:36.188825  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.188837  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:36.188845  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:36.188905  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:36.222491  726389 cri.go:89] found id: ""
	I1025 22:57:36.222532  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.222544  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:36.222555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:36.222621  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:36.257481  726389 cri.go:89] found id: ""
	I1025 22:57:36.257511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.257520  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:36.257527  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:36.257580  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:36.291774  726389 cri.go:89] found id: ""
	I1025 22:57:36.291805  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.291817  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:36.291829  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:36.291845  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:36.341240  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:36.341288  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:36.355280  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:36.355312  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:36.420727  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:36.420756  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:36.420770  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:36.496896  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:36.496943  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.035530  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.053640  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:39.053721  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:39.095892  726389 cri.go:89] found id: ""
	I1025 22:57:39.095924  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.095936  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:39.095945  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:39.096010  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:39.135571  726389 cri.go:89] found id: ""
	I1025 22:57:39.135603  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.135614  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:39.135621  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:39.135680  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:39.174481  726389 cri.go:89] found id: ""
	I1025 22:57:39.174517  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.174530  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:39.174539  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:39.174597  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:39.214453  726389 cri.go:89] found id: ""
	I1025 22:57:39.214488  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.214505  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:39.214512  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:39.214565  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:39.251084  726389 cri.go:89] found id: ""
	I1025 22:57:39.251111  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.251119  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:39.251126  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:39.251186  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:39.292067  726389 cri.go:89] found id: ""
	I1025 22:57:39.292098  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.292108  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:39.292117  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:39.292183  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:39.331918  726389 cri.go:89] found id: ""
	I1025 22:57:39.331953  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.331964  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:39.331972  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:39.332032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:39.366300  726389 cri.go:89] found id: ""
	I1025 22:57:39.366334  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.366346  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:39.366358  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:39.366373  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:39.451297  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:39.451344  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.492655  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:39.492695  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:39.551959  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:39.552004  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:39.565900  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:39.565934  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:39.637894  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:39.061158  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.083761  728361 api_server.go:72] duration metric: took 2.023424888s to wait for apiserver process to appear ...
	I1025 22:57:39.083795  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:39.083833  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:39.084432  728361 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I1025 22:57:39.584481  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.830058  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.830086  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:41.830102  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.851621  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.851664  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:42.083965  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.098809  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.098843  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:42.583931  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.595538  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.595610  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.084096  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.099317  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:43.099347  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.583916  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.588837  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:43.595393  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:43.595419  728361 api_server.go:131] duration metric: took 4.511617345s to wait for apiserver health ...
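	(The healthz probes above walk through the usual startup sequence: connection refused while the apiserver binds, 403 for the anonymous probe before RBAC bootstrap, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish, then 200 "ok". The sketch below shows that polling loop in minimal form, assuming the endpoint from the log and skipping TLS verification purely to keep it self-contained; it is not the api_server.go implementation.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above.
		const url = "https://192.168.72.113:8443/healthz"
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 20; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("not reachable yet:", err) // connection refused while the apiserver restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // "ok" - control plane is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}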
	I1025 22:57:43.595430  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:43.595436  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:43.597362  728361 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:43.598677  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:43.611172  728361 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:43.628657  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:43.639416  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:43.639446  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:43.639454  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:43.639466  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:43.639477  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:43.639487  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:43.639495  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:43.639505  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:43.639512  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:43.639518  728361 system_pods.go:74] duration metric: took 10.839818ms to wait for pod list to return data ...
	I1025 22:57:43.639528  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:43.646484  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:43.646509  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:43.646520  728361 node_conditions.go:105] duration metric: took 6.988285ms to run NodePressure ...
	I1025 22:57:43.646539  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:43.915625  728361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:43.934000  728361 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:43.934020  728361 kubeadm.go:597] duration metric: took 8.415258105s to restartPrimaryControlPlane
	I1025 22:57:43.934029  728361 kubeadm.go:394] duration metric: took 8.475239856s to StartCluster
	I1025 22:57:43.934049  728361 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.934116  728361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:43.935164  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.935405  728361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:43.935533  728361 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:43.935636  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:43.935668  728361 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-357495"
	I1025 22:57:43.935696  728361 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-357495"
	W1025 22:57:43.935713  728361 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:43.935727  728361 addons.go:69] Setting metrics-server=true in profile "newest-cni-357495"
	I1025 22:57:43.935749  728361 addons.go:234] Setting addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:43.935753  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	W1025 22:57:43.935763  728361 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:43.935818  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936205  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936245  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936283  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.935703  728361 addons.go:69] Setting default-storageclass=true in profile "newest-cni-357495"
	I1025 22:57:43.936320  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936321  728361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-357495"
	I1025 22:57:43.935713  728361 addons.go:69] Setting dashboard=true in profile "newest-cni-357495"
	I1025 22:57:43.936591  728361 addons.go:234] Setting addon dashboard=true in "newest-cni-357495"
	W1025 22:57:43.936602  728361 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:43.936637  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936834  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936873  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937009  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.937048  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937659  728361 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:43.939144  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:43.955960  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1025 22:57:43.956461  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.956979  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957007  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.957063  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I1025 22:57:43.957440  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.957472  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.957898  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957919  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.958078  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958127  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.958280  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.958921  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958970  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.960741  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I1025 22:57:43.961123  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.961708  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.961724  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.962094  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.962267  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.965281  728361 addons.go:234] Setting addon default-storageclass=true in "newest-cni-357495"
	W1025 22:57:43.965301  728361 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:43.965333  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.965612  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.965651  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.967851  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I1025 22:57:43.968252  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.968859  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.968877  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.969297  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.969895  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.969938  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.978224  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I1025 22:57:43.980247  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I1025 22:57:43.991129  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1025 22:57:43.997786  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.997926  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998540  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998646  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998705  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998729  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998995  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999070  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999305  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999365  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999543  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.999565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.999954  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.000573  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:44.000731  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:44.001562  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.002141  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.003847  728361 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:44.005301  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:44.005326  728361 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:44.005353  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.008444  728361 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:44.009433  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.009938  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.009962  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.010211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.010419  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.010565  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.010672  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.014136  728361 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.014160  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:44.014183  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.017633  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018066  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.018084  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018360  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.018538  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.018671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.018843  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.024748  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I1025 22:57:44.025455  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.025952  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.025974  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.027949  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.028345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.030416  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.030592  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1025 22:57:44.030623  728361 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.030636  728361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:44.030653  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.031671  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.032355  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.032380  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.033013  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.033268  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.034055  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034580  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.034604  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034914  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.035097  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.035108  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.035257  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.035424  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.037146  728361 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:44.038544  728361 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:42.138727  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:42.152525  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:42.152616  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:42.190900  726389 cri.go:89] found id: ""
	I1025 22:57:42.190935  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.190947  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:42.190955  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:42.191043  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:42.237668  726389 cri.go:89] found id: ""
	I1025 22:57:42.237698  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.237711  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:42.237720  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:42.237781  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:42.289049  726389 cri.go:89] found id: ""
	I1025 22:57:42.289077  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.289087  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:42.289096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:42.289155  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:42.334276  726389 cri.go:89] found id: ""
	I1025 22:57:42.334306  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.334318  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:42.334327  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:42.334385  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:42.379295  726389 cri.go:89] found id: ""
	I1025 22:57:42.379317  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.379325  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:42.379331  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:42.379375  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:42.416452  726389 cri.go:89] found id: ""
	I1025 22:57:42.416484  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.416496  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:42.416504  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:42.416563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:42.455290  726389 cri.go:89] found id: ""
	I1025 22:57:42.455324  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.455336  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:42.455352  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:42.455421  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:42.493367  726389 cri.go:89] found id: ""
	I1025 22:57:42.493396  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.493413  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:42.493426  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:42.493444  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:42.511673  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:42.511724  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:42.589951  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:42.589976  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:42.589994  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:42.697460  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:42.697498  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:42.757645  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:42.757672  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:44.039861  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:44.039876  728361 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:44.039902  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.043936  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044280  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.044300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044646  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.044847  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.045047  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.045212  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.214968  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:44.230045  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:44.230142  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:44.256130  728361 api_server.go:72] duration metric: took 320.677383ms to wait for apiserver process to appear ...
	I1025 22:57:44.256168  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:44.256195  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:44.261782  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:44.262769  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:44.262792  728361 api_server.go:131] duration metric: took 6.616839ms to wait for apiserver health ...
	I1025 22:57:44.262808  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:44.268736  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:44.268771  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:44.268782  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:44.268794  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:44.268802  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:44.268811  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:44.268824  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:44.268835  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:44.268844  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:44.268853  728361 system_pods.go:74] duration metric: took 6.033238ms to wait for pod list to return data ...
	I1025 22:57:44.268865  728361 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:44.274413  728361 default_sa.go:45] found service account: "default"
	I1025 22:57:44.274435  728361 default_sa.go:55] duration metric: took 5.560777ms for default service account to be created ...
	I1025 22:57:44.274448  728361 kubeadm.go:582] duration metric: took 339.005004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:44.274466  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:44.276931  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:44.276950  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:44.276977  728361 node_conditions.go:105] duration metric: took 2.502915ms to run NodePressure ...
	I1025 22:57:44.276992  728361 start.go:241] waiting for startup goroutines ...
	I1025 22:57:44.300122  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.327780  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:44.327815  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:44.334907  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:44.334936  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:44.365482  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:44.365518  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:44.376945  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.441691  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:44.441722  728361 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:44.443225  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:44.443247  728361 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:44.510983  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.511014  728361 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:44.522596  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:44.522631  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:44.593578  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.600368  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:44.600392  728361 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:44.687614  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:44.687642  728361 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:44.726363  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:44.726391  728361 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:44.771220  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:44.771247  728361 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:44.800050  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:44.800079  728361 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:44.875738  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:46.117050  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816877105s)
	I1025 22:57:46.117115  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.740124565s)
	I1025 22:57:46.117165  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117185  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117211  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.52359958s)
	I1025 22:57:46.117120  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117287  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117247  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117367  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117495  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117543  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117552  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117560  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117567  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117623  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117642  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117663  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117671  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117687  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117713  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117739  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117751  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117767  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.120140  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120155  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120155  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120172  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120168  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120191  728361 addons.go:475] Verifying addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:46.120226  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120252  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120604  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120614  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.137578  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.137598  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.137943  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.137945  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.137973  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545157  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.669353935s)
	I1025 22:57:46.545231  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545621  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545660  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545693  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545954  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545969  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.547693  728361 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-357495 addons enable metrics-server
	
	I1025 22:57:46.549219  728361 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1025 22:57:46.550703  728361 addons.go:510] duration metric: took 2.615173183s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1025 22:57:46.550752  728361 start.go:246] waiting for cluster config update ...
	I1025 22:57:46.550768  728361 start.go:255] writing updated cluster config ...
	I1025 22:57:46.551105  728361 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:46.603794  728361 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:46.605589  728361 out.go:177] * Done! kubectl is now configured to use "newest-cni-357495" cluster and "default" namespace by default
	I1025 22:57:45.312071  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:45.325800  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:45.325881  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:45.370543  726389 cri.go:89] found id: ""
	I1025 22:57:45.370572  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.370582  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:45.370590  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:45.370659  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:45.411970  726389 cri.go:89] found id: ""
	I1025 22:57:45.412009  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.412022  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:45.412032  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:45.412099  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:45.445037  726389 cri.go:89] found id: ""
	I1025 22:57:45.445073  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.445085  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:45.445094  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:45.445158  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:45.483563  726389 cri.go:89] found id: ""
	I1025 22:57:45.483595  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.483607  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:45.483615  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:45.483683  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:45.522944  726389 cri.go:89] found id: ""
	I1025 22:57:45.522978  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.522991  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:45.522999  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:45.523060  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:45.558055  726389 cri.go:89] found id: ""
	I1025 22:57:45.558086  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.558099  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:45.558107  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:45.558172  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:45.591533  726389 cri.go:89] found id: ""
	I1025 22:57:45.591564  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.591574  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:45.591581  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:45.591651  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:45.634951  726389 cri.go:89] found id: ""
	I1025 22:57:45.634985  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.634996  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:45.635009  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:45.635026  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:45.684807  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:45.684847  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:45.699038  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:45.699072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:45.762687  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:45.762718  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:45.762736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:45.851222  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:45.851265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:48.389992  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:48.403774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:48.403842  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:48.441883  726389 cri.go:89] found id: ""
	I1025 22:57:48.441908  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.441919  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:48.441929  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:48.441982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:48.477527  726389 cri.go:89] found id: ""
	I1025 22:57:48.477550  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.477558  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:48.477564  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:48.477612  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:48.514457  726389 cri.go:89] found id: ""
	I1025 22:57:48.514489  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.514500  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:48.514510  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:48.514579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:48.551264  726389 cri.go:89] found id: ""
	I1025 22:57:48.551296  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.551306  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:48.551312  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:48.551369  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:48.585426  726389 cri.go:89] found id: ""
	I1025 22:57:48.585454  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.585465  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:48.585473  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:48.585537  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:48.623734  726389 cri.go:89] found id: ""
	I1025 22:57:48.623772  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.623785  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:48.623794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:48.623865  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:48.661170  726389 cri.go:89] found id: ""
	I1025 22:57:48.661207  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.661219  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:48.661227  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:48.661304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:48.700776  726389 cri.go:89] found id: ""
	I1025 22:57:48.700803  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.700812  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:48.700825  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:48.700842  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:48.753294  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:48.753326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:48.770412  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:48.770443  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:48.847535  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:48.847562  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:48.847577  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:48.920817  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:48.920862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:51.460695  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:51.473870  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:51.473945  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:51.510350  726389 cri.go:89] found id: ""
	I1025 22:57:51.510383  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.510393  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:51.510406  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:51.510480  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:51.546705  726389 cri.go:89] found id: ""
	I1025 22:57:51.546742  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.546754  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:51.546762  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:51.546830  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:51.583728  726389 cri.go:89] found id: ""
	I1025 22:57:51.583759  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.583767  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:51.583774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:51.583831  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:51.623229  726389 cri.go:89] found id: ""
	I1025 22:57:51.623260  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.623269  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:51.623275  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:51.623332  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:51.661673  726389 cri.go:89] found id: ""
	I1025 22:57:51.661700  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.661710  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:51.661716  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:51.661769  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:51.707516  726389 cri.go:89] found id: ""
	I1025 22:57:51.707551  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.707564  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:51.707572  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:51.707646  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:51.745242  726389 cri.go:89] found id: ""
	I1025 22:57:51.745277  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.745288  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:51.745295  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:51.745360  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:51.778136  726389 cri.go:89] found id: ""
	I1025 22:57:51.778165  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.778180  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:51.778193  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:51.778210  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:51.826323  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:51.826365  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:51.839635  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:51.839673  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:51.905218  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:51.905242  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:51.905260  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:51.979641  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:51.979680  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.519362  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:54.532482  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:54.532560  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:54.566193  726389 cri.go:89] found id: ""
	I1025 22:57:54.566221  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.566232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:54.566240  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:54.566304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:54.602139  726389 cri.go:89] found id: ""
	I1025 22:57:54.602166  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.602178  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:54.602187  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:54.602245  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:54.636484  726389 cri.go:89] found id: ""
	I1025 22:57:54.636519  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.636529  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:54.636545  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:54.636610  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:54.670617  726389 cri.go:89] found id: ""
	I1025 22:57:54.670649  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.670660  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:54.670666  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:54.670726  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:54.702360  726389 cri.go:89] found id: ""
	I1025 22:57:54.702400  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.702412  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:54.702420  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:54.702491  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:54.736101  726389 cri.go:89] found id: ""
	I1025 22:57:54.736140  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.736153  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:54.736161  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:54.736225  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:54.768706  726389 cri.go:89] found id: ""
	I1025 22:57:54.768744  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.768757  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:54.768766  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:54.768828  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:54.800919  726389 cri.go:89] found id: ""
	I1025 22:57:54.800965  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.800978  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:54.800989  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:54.801008  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:54.866242  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:54.866277  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:54.866294  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:54.942084  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:54.942127  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.979383  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:54.979422  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:55.029227  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:55.029269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.543312  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:57.557090  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:57.557176  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:57.594813  726389 cri.go:89] found id: ""
	I1025 22:57:57.594847  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.594860  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:57.594868  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:57.594933  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:57.629736  726389 cri.go:89] found id: ""
	I1025 22:57:57.629769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.629781  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:57.629790  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:57.629855  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:57.663895  726389 cri.go:89] found id: ""
	I1025 22:57:57.663927  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.663935  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:57.663940  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:57.663991  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:57.696122  726389 cri.go:89] found id: ""
	I1025 22:57:57.696153  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.696164  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:57.696171  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:57.696238  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:57.733740  726389 cri.go:89] found id: ""
	I1025 22:57:57.733769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.733778  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:57.733785  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:57.733839  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:57.766855  726389 cri.go:89] found id: ""
	I1025 22:57:57.766886  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.766897  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:57.766905  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:57.766971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:57.804080  726389 cri.go:89] found id: ""
	I1025 22:57:57.804110  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.804118  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:57.804125  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:57.804178  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:57.837482  726389 cri.go:89] found id: ""
	I1025 22:57:57.837511  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.837520  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:57.837530  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:57.837542  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:57.889217  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:57.889265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.902999  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:57.903039  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:57.968303  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:57.968327  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:57.968345  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:58.046929  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:58.046981  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:00.589410  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:00.602271  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:00.602344  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:00.635947  726389 cri.go:89] found id: ""
	I1025 22:58:00.635980  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.635989  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:00.635995  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:00.636057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:00.668039  726389 cri.go:89] found id: ""
	I1025 22:58:00.668072  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.668083  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:00.668092  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:00.668163  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:00.700889  726389 cri.go:89] found id: ""
	I1025 22:58:00.700916  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.700925  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:00.700931  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:00.701026  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:00.734409  726389 cri.go:89] found id: ""
	I1025 22:58:00.734440  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.734452  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:00.734459  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:00.734527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:00.770435  726389 cri.go:89] found id: ""
	I1025 22:58:00.770462  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.770469  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:00.770476  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:00.770535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:00.803431  726389 cri.go:89] found id: ""
	I1025 22:58:00.803466  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.803477  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:00.803486  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:00.803552  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:00.837896  726389 cri.go:89] found id: ""
	I1025 22:58:00.837932  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.837943  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:00.837951  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:00.838025  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:00.875375  726389 cri.go:89] found id: ""
	I1025 22:58:00.875414  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.875425  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:00.875437  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:00.875453  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:00.925019  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:00.925057  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:00.938018  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:00.938050  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:01.008170  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:01.008199  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:01.008216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:01.082487  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:01.082530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:03.623673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:03.637286  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:03.637371  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:03.673836  726389 cri.go:89] found id: ""
	I1025 22:58:03.673884  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.673897  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:03.673906  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:03.673971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:03.706700  726389 cri.go:89] found id: ""
	I1025 22:58:03.706731  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.706742  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:03.706750  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:03.706818  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:03.738775  726389 cri.go:89] found id: ""
	I1025 22:58:03.738804  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.738815  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:03.738823  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:03.738889  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:03.770246  726389 cri.go:89] found id: ""
	I1025 22:58:03.770274  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.770284  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:03.770292  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:03.770366  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:03.811193  726389 cri.go:89] found id: ""
	I1025 22:58:03.811222  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.811231  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:03.811237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:03.811290  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:03.842644  726389 cri.go:89] found id: ""
	I1025 22:58:03.842678  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.842686  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:03.842693  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:03.842750  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:03.874753  726389 cri.go:89] found id: ""
	I1025 22:58:03.874780  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.874788  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:03.874794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:03.874845  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:03.907133  726389 cri.go:89] found id: ""
	I1025 22:58:03.907162  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.907173  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:03.907186  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:03.907202  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:03.957250  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:03.957287  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:03.970381  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:03.970408  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:04.033620  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:04.033647  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:04.033663  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:04.108254  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:04.108296  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:06.647214  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:06.660871  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:06.660942  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:06.694191  726389 cri.go:89] found id: ""
	I1025 22:58:06.694223  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.694232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:06.694243  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:06.694295  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:06.728177  726389 cri.go:89] found id: ""
	I1025 22:58:06.728209  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.728222  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:06.728229  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:06.728300  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:06.761968  726389 cri.go:89] found id: ""
	I1025 22:58:06.762003  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.762015  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:06.762022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:06.762089  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:06.794139  726389 cri.go:89] found id: ""
	I1025 22:58:06.794172  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.794186  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:06.794195  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:06.794261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:06.830436  726389 cri.go:89] found id: ""
	I1025 22:58:06.830468  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.830481  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:06.830490  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:06.830557  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:06.865350  726389 cri.go:89] found id: ""
	I1025 22:58:06.865391  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.865405  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:06.865412  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:06.865468  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:06.899259  726389 cri.go:89] found id: ""
	I1025 22:58:06.899288  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.899298  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:06.899304  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:06.899354  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:06.930753  726389 cri.go:89] found id: ""
	I1025 22:58:06.930784  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.930793  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:06.930802  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:06.930813  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:06.943437  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:06.943464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:07.012837  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:07.012862  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:07.012875  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:07.085555  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:07.085606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:07.125421  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:07.125464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:09.678235  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:09.691802  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:09.691884  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:09.730774  726389 cri.go:89] found id: ""
	I1025 22:58:09.730813  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.730826  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:09.730838  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:09.730893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:09.768841  726389 cri.go:89] found id: ""
	I1025 22:58:09.768878  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.768894  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:09.768903  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:09.768984  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:09.802970  726389 cri.go:89] found id: ""
	I1025 22:58:09.803001  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.803013  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:09.803022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:09.803093  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:09.835041  726389 cri.go:89] found id: ""
	I1025 22:58:09.835075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.835087  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:09.835095  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:09.835148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:09.868561  726389 cri.go:89] found id: ""
	I1025 22:58:09.868590  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.868601  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:09.868609  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:09.868689  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:09.901694  726389 cri.go:89] found id: ""
	I1025 22:58:09.901721  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.901730  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:09.901737  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:09.901793  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:09.936138  726389 cri.go:89] found id: ""
	I1025 22:58:09.936167  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.936178  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:09.936187  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:09.936250  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:09.969041  726389 cri.go:89] found id: ""
	I1025 22:58:09.969075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.969087  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:09.969100  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:09.969115  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:10.036786  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:10.036816  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:10.036832  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:10.108946  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:10.109015  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:10.150241  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:10.150278  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:10.201815  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:10.201862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:12.715673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:12.729286  726389 kubeadm.go:597] duration metric: took 4m4.085037105s to restartPrimaryControlPlane
	W1025 22:58:12.729380  726389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 22:58:12.729407  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:58:13.183339  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:58:13.197871  726389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:58:13.207895  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:58:13.217907  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:58:13.217929  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 22:58:13.217990  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:58:13.227351  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:58:13.227422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:58:13.237158  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:58:13.246361  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:58:13.246431  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:58:13.256260  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.265821  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:58:13.265885  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.275535  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:58:13.284737  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:58:13.284804  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:58:13.294340  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:58:13.357520  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:58:13.357618  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:58:13.492934  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:58:13.493109  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:58:13.493237  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:58:13.676988  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:58:13.679089  726389 out.go:235]   - Generating certificates and keys ...
	I1025 22:58:13.679191  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:58:13.679294  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:58:13.679410  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:58:13.679499  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:58:13.679591  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:58:13.679673  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:58:13.679773  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:58:13.679860  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:58:13.679958  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:58:13.680063  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:58:13.680117  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:58:13.680195  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:58:13.792687  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:58:13.867665  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:58:14.014215  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:58:14.157457  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:58:14.181574  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:58:14.181693  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:58:14.181766  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:58:14.322320  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:58:14.324285  726389 out.go:235]   - Booting up control plane ...
	I1025 22:58:14.324402  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:58:14.328027  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:58:14.331034  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:58:14.332233  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:58:14.340260  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:58:54.338405  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:58:54.338592  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:54.338841  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:58:59.339365  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:59.339661  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:09.340395  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:09.340593  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:29.341629  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:29.341864  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.342793  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:09.343142  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.343171  726389 kubeadm.go:310] 
	I1025 23:00:09.343244  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:00:09.343309  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:00:09.343320  726389 kubeadm.go:310] 
	I1025 23:00:09.343358  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:00:09.343390  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:00:09.343481  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:00:09.343489  726389 kubeadm.go:310] 
	I1025 23:00:09.343609  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:00:09.343655  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:00:09.343701  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:00:09.343711  726389 kubeadm.go:310] 
	I1025 23:00:09.343811  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:00:09.343886  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:00:09.343898  726389 kubeadm.go:310] 
	I1025 23:00:09.344020  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:00:09.344148  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:00:09.344258  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:00:09.344355  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:00:09.344365  726389 kubeadm.go:310] 
	I1025 23:00:09.345056  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:00:09.345170  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:00:09.345261  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1025 23:00:09.345502  726389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 23:00:09.345550  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 23:00:09.805116  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 23:00:09.820225  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 23:00:09.829679  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 23:00:09.829702  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 23:00:09.829756  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 23:00:09.838792  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 23:00:09.838857  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 23:00:09.847823  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 23:00:09.856364  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 23:00:09.856422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 23:00:09.865400  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.873766  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 23:00:09.873827  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.882969  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 23:00:09.891527  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 23:00:09.891606  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 23:00:09.900940  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 23:00:09.969506  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 23:00:09.969568  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 23:00:10.115097  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 23:00:10.115224  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 23:00:10.115397  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 23:00:10.293601  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 23:00:10.296142  726389 out.go:235]   - Generating certificates and keys ...
	I1025 23:00:10.296255  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 23:00:10.296361  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 23:00:10.296502  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 23:00:10.296583  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 23:00:10.296676  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 23:00:10.296748  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 23:00:10.296840  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 23:00:10.296949  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 23:00:10.297071  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 23:00:10.297182  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 23:00:10.297236  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 23:00:10.297334  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 23:00:10.411124  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 23:00:10.530014  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 23:00:10.624647  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 23:00:10.777858  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 23:00:10.797014  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 23:00:10.798078  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 23:00:10.798168  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 23:00:10.940610  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 23:00:10.942427  726389 out.go:235]   - Booting up control plane ...
	I1025 23:00:10.942572  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 23:00:10.959667  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 23:00:10.959757  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 23:00:10.959910  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 23:00:10.963884  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 23:00:50.966097  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 23:00:50.966211  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:50.966448  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:55.966794  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:55.967051  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:05.967421  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:05.967674  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:25.968507  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:25.968765  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969405  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:02:05.969627  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969639  726389 kubeadm.go:310] 
	I1025 23:02:05.969676  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:02:05.969777  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:02:05.969821  726389 kubeadm.go:310] 
	I1025 23:02:05.969885  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:02:05.969935  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:02:05.970078  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:02:05.970092  726389 kubeadm.go:310] 
	I1025 23:02:05.970248  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:02:05.970290  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:02:05.970375  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:02:05.970388  726389 kubeadm.go:310] 
	I1025 23:02:05.970517  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:02:05.970595  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:02:05.970602  726389 kubeadm.go:310] 
	I1025 23:02:05.970729  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:02:05.970840  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:02:05.970914  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:02:05.971019  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:02:05.971031  726389 kubeadm.go:310] 
	I1025 23:02:05.971808  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:02:05.971923  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:02:05.972087  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 23:02:05.972124  726389 kubeadm.go:394] duration metric: took 7m57.377970738s to StartCluster
	I1025 23:02:05.972182  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 23:02:05.972244  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 23:02:06.012800  726389 cri.go:89] found id: ""
	I1025 23:02:06.012837  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.012852  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 23:02:06.012860  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 23:02:06.012925  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 23:02:06.051712  726389 cri.go:89] found id: ""
	I1025 23:02:06.051748  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.051761  726389 logs.go:284] No container was found matching "etcd"
	I1025 23:02:06.051769  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 23:02:06.051834  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 23:02:06.084904  726389 cri.go:89] found id: ""
	I1025 23:02:06.084939  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.084950  726389 logs.go:284] No container was found matching "coredns"
	I1025 23:02:06.084973  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 23:02:06.085056  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 23:02:06.120083  726389 cri.go:89] found id: ""
	I1025 23:02:06.120121  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.120133  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 23:02:06.120140  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 23:02:06.120197  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 23:02:06.154172  726389 cri.go:89] found id: ""
	I1025 23:02:06.154197  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.154205  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 23:02:06.154211  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 23:02:06.154261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 23:02:06.187085  726389 cri.go:89] found id: ""
	I1025 23:02:06.187130  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.187143  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 23:02:06.187152  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 23:02:06.187220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 23:02:06.220391  726389 cri.go:89] found id: ""
	I1025 23:02:06.220421  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.220430  726389 logs.go:284] No container was found matching "kindnet"
	I1025 23:02:06.220437  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 23:02:06.220503  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 23:02:06.254240  726389 cri.go:89] found id: ""
	I1025 23:02:06.254274  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.254286  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 23:02:06.254301  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 23:02:06.254340  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 23:02:06.301861  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 23:02:06.301907  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 23:02:06.315888  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 23:02:06.315919  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 23:02:06.386034  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 23:02:06.386073  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 23:02:06.386091  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 23:02:06.487167  726389 logs.go:123] Gathering logs for container status ...
	I1025 23:02:06.487216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 23:02:06.539615  726389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 23:02:06.539690  726389 out.go:270] * 
	W1025 23:02:06.539895  726389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.539922  726389 out.go:270] * 
	W1025 23:02:06.540790  726389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 23:02:06.545196  726389 out.go:201] 
	W1025 23:02:06.546506  726389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.546544  726389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 23:02:06.546564  726389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 23:02:06.548055  726389 out.go:201] 
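	
	A minimal sketch of the follow-up that the suggestion above points at, assuming the old-k8s-version-005932 profile and the CRI-O socket path that appear in the log sections below (illustrative commands only, not part of the captured output):
	
	# check whether the kubelet is running on the node and why it is failing
	minikube ssh -p old-k8s-version-005932 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-005932 "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	minikube ssh -p old-k8s-version-005932 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# retry the start with the cgroup-driver override from the suggestion
	minikube start -p old-k8s-version-005932 --extra-config=kubelet.cgroup-driver=systemd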
	
	
	==> CRI-O <==
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.622172009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897327622145960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=920796d1-7b82-41ad-8ec4-c98110b23e78 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.622810086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf553791-5c11-4ad8-a5ab-f71ffc4ab8ac name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.622893578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf553791-5c11-4ad8-a5ab-f71ffc4ab8ac name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.622943478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bf553791-5c11-4ad8-a5ab-f71ffc4ab8ac name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.661361125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f99169b-912c-4f88-b850-fb3ed3114a78 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.661454589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f99169b-912c-4f88-b850-fb3ed3114a78 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.662859004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2e4bb4a-3bb7-4e54-a240-8954c4870820 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.663248439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897327663225385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2e4bb4a-3bb7-4e54-a240-8954c4870820 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.663893191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37f7b037-4dda-4672-a0ca-0f86e1bf82c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.663943987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37f7b037-4dda-4672-a0ca-0f86e1bf82c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.663974490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=37f7b037-4dda-4672-a0ca-0f86e1bf82c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.700594385Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b5a1e7f-0c5e-4889-9103-f6c18b99c3a8 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.700744267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b5a1e7f-0c5e-4889-9103-f6c18b99c3a8 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.701964197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1f04e4e-0c34-47dc-99cf-27e31eaee5b8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.702331565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897327702310646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1f04e4e-0c34-47dc-99cf-27e31eaee5b8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.702870112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2617b61-77da-4107-a8f8-8baf3feb8931 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.702940040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2617b61-77da-4107-a8f8-8baf3feb8931 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.702974642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f2617b61-77da-4107-a8f8-8baf3feb8931 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.734485676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9147888-58dd-42f4-9ab2-521296a0d792 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.734574087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9147888-58dd-42f4-9ab2-521296a0d792 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.735987931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d71bdc6e-11b6-446e-9311-f348bd06f0ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.736375171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897327736351986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d71bdc6e-11b6-446e-9311-f348bd06f0ad name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.736973580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dae21ae-2e23-4f5d-9836-0215d7f0cab9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.737058434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dae21ae-2e23-4f5d-9836-0215d7f0cab9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:02:07 old-k8s-version-005932 crio[631]: time="2024-10-25 23:02:07.737096194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7dae21ae-2e23-4f5d-9836-0215d7f0cab9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct25 22:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053538] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.634497] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 22:54] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064930] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061174] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.184894] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.167513] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.254112] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.419742] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.063304] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.826111] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +11.981319] kauditd_printk_skb: 46 callbacks suppressed
	[Oct25 22:58] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Oct25 23:00] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.059452] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:02:07 up 8 min,  0 users,  load average: 0.06, 0.10, 0.08
	Linux old-k8s-version-005932 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00096bef0)
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009bbef0, 0x4f0ac20, 0xc000b3d950, 0x1, 0xc0001000c0)
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00092a2a0, 0xc0001000c0)
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a48ff0, 0xc000913040)
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 25 23:02:05 old-k8s-version-005932 kubelet[5536]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 25 23:02:05 old-k8s-version-005932 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 25 23:02:05 old-k8s-version-005932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 25 23:02:06 old-k8s-version-005932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 25 23:02:06 old-k8s-version-005932 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 25 23:02:06 old-k8s-version-005932 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 25 23:02:06 old-k8s-version-005932 kubelet[5595]: I1025 23:02:06.484303    5595 server.go:416] Version: v1.20.0
	Oct 25 23:02:06 old-k8s-version-005932 kubelet[5595]: I1025 23:02:06.484570    5595 server.go:837] Client rotation is on, will bootstrap in background
	Oct 25 23:02:06 old-k8s-version-005932 kubelet[5595]: I1025 23:02:06.486657    5595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 25 23:02:06 old-k8s-version-005932 kubelet[5595]: I1025 23:02:06.487737    5595 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 25 23:02:06 old-k8s-version-005932 kubelet[5595]: W1025 23:02:06.487743    5595 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (226.619727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-005932" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.78s)
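Context for the warnings in the next test: the test helpers list pods by label selector in a retry loop until a matching pod is Running or the wait times out, logging each failed list attempt. The following is a minimal client-go sketch of such a poll loop, illustrative only (it is not minikube's actual helpers_test.go code); the kubeconfig path, poll interval, and timeout are assumptions.

// Illustrative sketch of a label-selector poll loop; NOT minikube's helpers_test.go code.
// The kubeconfig path, namespace, selector, interval and timeout below are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 3s, for up to 9 minutes, until a matching pod is Running.
	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Log and keep polling; with the apiserver down this is where
				// repeated "connect: connection refused" warnings come from.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Printf("timed out waiting for kubernetes-dashboard pods: %v\n", err)
	}
}

With the apiserver stopped, every List call in a loop of this shape fails with "connect: connection refused", which matches the warnings repeated throughout the log below.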

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 4 times in a row)
E1025 23:02:12.137049  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 11 times in a row)
E1025 23:02:22.936653  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 34 times in a row)
E1025 23:02:57.096871  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 10 times in a row)
E1025 23:03:06.945902  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 60 times in a row)
E1025 23:04:06.912530  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(the identical warning above was logged 7 times in a row)
E1025 23:04:13.839727  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/no-preload-657458/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 14 times]
E1025 23:04:28.274238  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 13 times]
E1025 23:04:40.902581  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:04:41.543664  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/no-preload-657458/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:04:42.623911  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 13 times]
E1025 23:04:55.978698  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 25 times]
E1025 23:05:20.932372  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 9 times]
E1025 23:05:29.977238  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 17 times]
E1025 23:05:47.013454  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 19 times]
E1025 23:06:05.687342  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 38 times]
E1025 23:06:43.996925  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 4 times]
E1025 23:06:47.714032  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 21 more times]
E1025 23:07:10.080202  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 12 more times]
E1025 23:07:22.936936  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 33 more times]
E1025 23:07:57.097136  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 9 more times]
E1025 23:08:06.946165  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 3 more times]
E1025 23:08:10.780925  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 34 more times]
E1025 23:08:45.998755  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 20 more times]
E1025 23:09:06.912230  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 6 more times]
E1025 23:09:13.839859  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/no-preload-657458/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[warning above repeated 5 more times]
E1025 23:09:20.163399  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[identical warning repeated 7 more times]
E1025 23:09:28.274235  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[identical warning repeated 12 more times]
E1025 23:09:40.902566  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:09:42.623498  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[identical warning repeated 37 more times]
E1025 23:10:20.932356  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[identical warning repeated 25 more times]
E1025 23:10:47.013498  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[identical warning repeated 20 more times]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (227.702666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-005932" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
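The run of identical warnings above shows the test's dashboard-pod poller getting connection refused from the apiserver at https://192.168.39.215:8443 until the 9m0s deadline expired. As a rough manual cross-check (a sketch only: these are ordinary kubectl/minikube invocations assumed here, not commands the harness itself issues; the profile/context name old-k8s-version-005932 is taken from this run), the same label selector could be queried directly:

	# check host/apiserver state for the profile, then list the pods the test waits on
	out/minikube-linux-amd64 status -p old-k8s-version-005932
	kubectl --context old-k8s-version-005932 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide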
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (212.971131ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-005932 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-601894 image list                          | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-657458 image list                           | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| addons  | enable metrics-server -p newest-cni-357495             | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-357495                  | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-166447                           | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| image   | newest-cni-357495 image list                           | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 22:57:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:57:09.006096  728361 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:57:09.006201  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006209  728361 out.go:358] Setting ErrFile to fd 2...
	I1025 22:57:09.006214  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006451  728361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:57:09.006988  728361 out.go:352] Setting JSON to false
	I1025 22:57:09.007986  728361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20373,"bootTime":1729876656,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:57:09.008093  728361 start.go:139] virtualization: kvm guest
	I1025 22:57:09.010465  728361 out.go:177] * [newest-cni-357495] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:57:09.011802  728361 notify.go:220] Checking for updates...
	I1025 22:57:09.011839  728361 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:57:09.013146  728361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:57:09.014475  728361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:09.015727  728361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:57:09.016972  728361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:57:09.018210  728361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:57:09.019736  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:09.020150  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.020224  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.035482  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1025 22:57:09.035920  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.036595  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.036617  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.037009  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.037247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.037593  728361 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:57:09.037912  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.037954  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.053072  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I1025 22:57:09.053595  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.054218  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.054244  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.054588  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.054779  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.090073  728361 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:57:09.091244  728361 start.go:297] selected driver: kvm2
	I1025 22:57:09.091260  728361 start.go:901] validating driver "kvm2" against &{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.091400  728361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:57:09.092078  728361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.092162  728361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:57:09.107070  728361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:57:09.107505  728361 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:09.107537  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:09.107588  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:09.107626  728361 start.go:340] cluster config:
	{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.107743  728361 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.109586  728361 out.go:177] * Starting "newest-cni-357495" primary control-plane node in "newest-cni-357495" cluster
	I1025 22:57:09.110853  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:09.110886  728361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 22:57:09.110896  728361 cache.go:56] Caching tarball of preloaded images
	I1025 22:57:09.111001  728361 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:57:09.111015  728361 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 22:57:09.111159  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:09.111340  728361 start.go:360] acquireMachinesLock for newest-cni-357495: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:57:09.111385  728361 start.go:364] duration metric: took 26.544µs to acquireMachinesLock for "newest-cni-357495"
	I1025 22:57:09.111405  728361 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:57:09.111420  728361 fix.go:54] fixHost starting: 
	I1025 22:57:09.111679  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.111715  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.126695  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1025 22:57:09.127148  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.127662  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.127683  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.128015  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.128203  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.128345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:09.129983  728361 fix.go:112] recreateIfNeeded on newest-cni-357495: state=Stopped err=<nil>
	I1025 22:57:09.130022  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	W1025 22:57:09.130181  728361 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 22:57:09.131768  728361 out.go:177] * Restarting existing kvm2 VM for "newest-cni-357495" ...
	I1025 22:57:04.664834  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:04.677759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:04.677820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:04.710557  726389 cri.go:89] found id: ""
	I1025 22:57:04.710585  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.710594  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:04.710601  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:04.710655  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:04.747197  726389 cri.go:89] found id: ""
	I1025 22:57:04.747225  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.747234  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:04.747240  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:04.747288  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:04.787986  726389 cri.go:89] found id: ""
	I1025 22:57:04.788018  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.788027  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:04.788034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:04.788091  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:04.819796  726389 cri.go:89] found id: ""
	I1025 22:57:04.819824  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.819833  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:04.819839  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:04.819887  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:04.856885  726389 cri.go:89] found id: ""
	I1025 22:57:04.856925  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.856938  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:04.856946  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:04.857021  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:04.901723  726389 cri.go:89] found id: ""
	I1025 22:57:04.901759  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.901770  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:04.901779  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:04.901846  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:04.943775  726389 cri.go:89] found id: ""
	I1025 22:57:04.943810  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.943821  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:04.943830  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:04.943893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:04.985957  726389 cri.go:89] found id: ""
	I1025 22:57:04.985982  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.985991  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:04.986000  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:04.986012  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:05.061490  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:05.061529  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:05.103028  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:05.103059  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:05.152607  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:05.152644  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:05.167577  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:05.167624  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:05.246428  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:07.747514  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:07.764567  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:07.764653  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:07.804356  726389 cri.go:89] found id: ""
	I1025 22:57:07.804453  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.804479  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:07.804498  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:07.804594  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:07.852155  726389 cri.go:89] found id: ""
	I1025 22:57:07.852190  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.852201  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:07.852210  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:07.852287  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:07.906149  726389 cri.go:89] found id: ""
	I1025 22:57:07.906195  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.906209  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:07.906237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:07.906321  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:07.946134  726389 cri.go:89] found id: ""
	I1025 22:57:07.946165  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.946177  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:07.946189  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:07.946257  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:07.994191  726389 cri.go:89] found id: ""
	I1025 22:57:07.994225  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.994243  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:07.994252  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:07.994324  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:08.039254  726389 cri.go:89] found id: ""
	I1025 22:57:08.039284  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.039296  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:08.039303  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:08.039370  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:08.083985  726389 cri.go:89] found id: ""
	I1025 22:57:08.084016  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.084027  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:08.084034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:08.084100  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:08.121051  726389 cri.go:89] found id: ""
	I1025 22:57:08.121084  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.121096  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:08.121111  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:08.121128  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:08.210698  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:08.210743  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:08.251297  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:08.251326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:08.309007  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:08.309049  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:08.323243  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:08.323281  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:08.395704  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:06.985771  725359 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001894992s
	I1025 22:57:06.985860  725359 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1025 22:57:11.989818  725359 kubeadm.go:310] [api-check] The API server is healthy after 5.002310213s
	I1025 22:57:12.000090  725359 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 22:57:12.029347  725359 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 22:57:12.065009  725359 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 22:57:12.065298  725359 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-166447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 22:57:12.080390  725359 kubeadm.go:310] [bootstrap-token] Using token: gn84c5.mnibhpx86csafbn4
	I1025 22:57:12.081888  725359 out.go:235]   - Configuring RBAC rules ...
	I1025 22:57:12.082040  725359 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 22:57:12.094696  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 22:57:12.107652  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 22:57:12.112673  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 22:57:12.118594  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 22:57:12.131842  725359 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 22:57:12.397191  725359 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 22:57:12.821901  725359 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 22:57:13.393906  725359 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 22:57:13.394919  725359 kubeadm.go:310] 
	I1025 22:57:13.395007  725359 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 22:57:13.395019  725359 kubeadm.go:310] 
	I1025 22:57:13.395120  725359 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 22:57:13.395130  725359 kubeadm.go:310] 
	I1025 22:57:13.395163  725359 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 22:57:13.395252  725359 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 22:57:13.395324  725359 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 22:57:13.395333  725359 kubeadm.go:310] 
	I1025 22:57:13.395388  725359 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 22:57:13.395398  725359 kubeadm.go:310] 
	I1025 22:57:13.395460  725359 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 22:57:13.395470  725359 kubeadm.go:310] 
	I1025 22:57:13.395533  725359 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 22:57:13.395623  725359 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 22:57:13.395711  725359 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 22:57:13.395735  725359 kubeadm.go:310] 
	I1025 22:57:13.395856  725359 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 22:57:13.395977  725359 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 22:57:13.395991  725359 kubeadm.go:310] 
	I1025 22:57:13.396103  725359 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396257  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a \
	I1025 22:57:13.396290  725359 kubeadm.go:310] 	--control-plane 
	I1025 22:57:13.396299  725359 kubeadm.go:310] 
	I1025 22:57:13.396418  725359 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 22:57:13.396428  725359 kubeadm.go:310] 
	I1025 22:57:13.396539  725359 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396691  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a 
	I1025 22:57:13.397292  725359 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:57:13.397395  725359 cni.go:84] Creating CNI manager for ""
	I1025 22:57:13.397415  725359 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:13.399132  725359 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:09.132799  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Start
	I1025 22:57:09.133007  728361 main.go:141] libmachine: (newest-cni-357495) starting domain...
	I1025 22:57:09.133028  728361 main.go:141] libmachine: (newest-cni-357495) ensuring networks are active...
	I1025 22:57:09.133784  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network default is active
	I1025 22:57:09.134127  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network mk-newest-cni-357495 is active
	I1025 22:57:09.134535  728361 main.go:141] libmachine: (newest-cni-357495) getting domain XML...
	I1025 22:57:09.135259  728361 main.go:141] libmachine: (newest-cni-357495) creating domain...
	I1025 22:57:10.376675  728361 main.go:141] libmachine: (newest-cni-357495) waiting for IP...
	I1025 22:57:10.377919  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.378434  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.378529  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.378420  728395 retry.go:31] will retry after 234.774904ms: waiting for domain to come up
	I1025 22:57:10.615044  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.615713  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.615744  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.615692  728395 retry.go:31] will retry after 344.301388ms: waiting for domain to come up
	I1025 22:57:10.961349  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.961953  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.961987  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.961901  728395 retry.go:31] will retry after 439.472335ms: waiting for domain to come up
	I1025 22:57:11.403081  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:11.403801  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:11.403833  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:11.403754  728395 retry.go:31] will retry after 603.917881ms: waiting for domain to come up
	I1025 22:57:12.009100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.009791  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.009816  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.009766  728395 retry.go:31] will retry after 654.012412ms: waiting for domain to come up
	I1025 22:57:12.665694  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.666298  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.666331  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.666254  728395 retry.go:31] will retry after 598.223644ms: waiting for domain to come up
	I1025 22:57:13.266161  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:13.266714  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:13.266746  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:13.266670  728395 retry.go:31] will retry after 807.374369ms: waiting for domain to come up
	I1025 22:57:10.896885  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:10.912430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:10.912544  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:10.949298  726389 cri.go:89] found id: ""
	I1025 22:57:10.949332  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.949345  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:10.949356  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:10.949420  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:10.992906  726389 cri.go:89] found id: ""
	I1025 22:57:10.992941  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.992963  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:10.992972  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:10.993037  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:11.035283  726389 cri.go:89] found id: ""
	I1025 22:57:11.035312  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.035321  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:11.035329  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:11.035391  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:11.075912  726389 cri.go:89] found id: ""
	I1025 22:57:11.075945  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.075957  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:11.075966  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:11.076031  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:11.116675  726389 cri.go:89] found id: ""
	I1025 22:57:11.116709  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.116721  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:11.116727  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:11.116788  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:11.153210  726389 cri.go:89] found id: ""
	I1025 22:57:11.153244  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.153258  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:11.153267  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:11.153331  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:11.195233  726389 cri.go:89] found id: ""
	I1025 22:57:11.195266  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.195278  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:11.195285  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:11.195346  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:11.237164  726389 cri.go:89] found id: ""
	I1025 22:57:11.237195  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.237206  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:11.237219  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:11.237236  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:11.299994  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:11.300043  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:11.316006  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:11.316055  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:11.404343  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:11.404368  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:11.404384  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:11.496349  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:11.496391  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:14.050229  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:14.064529  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:14.064615  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:14.101831  726389 cri.go:89] found id: ""
	I1025 22:57:14.101863  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.101877  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:14.101886  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:14.101950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:14.139876  726389 cri.go:89] found id: ""
	I1025 22:57:14.139906  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.139915  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:14.139921  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:14.139982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:14.175405  726389 cri.go:89] found id: ""
	I1025 22:57:14.175442  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.175454  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:14.175463  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:14.175535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:14.220337  726389 cri.go:89] found id: ""
	I1025 22:57:14.220372  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.220392  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:14.220400  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:14.220471  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:14.262358  726389 cri.go:89] found id: ""
	I1025 22:57:14.262384  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.262393  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:14.262399  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:14.262457  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:14.303586  726389 cri.go:89] found id: ""
	I1025 22:57:14.303621  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.303629  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:14.303636  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:14.303687  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:14.343365  726389 cri.go:89] found id: ""
	I1025 22:57:14.343399  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.343411  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:14.343421  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:14.343494  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:14.376842  726389 cri.go:89] found id: ""
	I1025 22:57:14.376879  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.376892  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:14.376905  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:14.376921  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:14.426780  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:14.426819  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:14.439976  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:14.440007  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:14.512226  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:14.512258  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:14.512276  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:14.588240  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:14.588284  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:13.400319  725359 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:13.410568  725359 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:13.431208  725359 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:13.431301  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:13.431322  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-166447 minikube.k8s.io/updated_at=2024_10_25T22_57_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=default-k8s-diff-port-166447 minikube.k8s.io/primary=true
	I1025 22:57:13.639716  725359 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:13.639860  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.140884  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.639916  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.140843  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.640888  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.140691  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.640258  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.140873  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.640232  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.748262  725359 kubeadm.go:1113] duration metric: took 4.317031918s to wait for elevateKubeSystemPrivileges
	I1025 22:57:17.748310  725359 kubeadm.go:394] duration metric: took 5m32.487100054s to StartCluster
	I1025 22:57:17.748334  725359 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.748440  725359 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:17.749728  725359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.750023  725359 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:17.750214  725359 config.go:182] Loaded profile config "default-k8s-diff-port-166447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:17.750280  725359 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:17.750383  725359 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750403  725359 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750412  725359 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:17.750443  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750455  725359 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750479  725359 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-166447"
	I1025 22:57:17.750472  725359 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750509  725359 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750518  725359 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:17.750548  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750880  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750914  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.750968  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750996  725359 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.751003  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751012  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751019  725359 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.751028  725359 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:17.751043  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751061  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.751477  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751531  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.752307  725359 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:17.754336  725359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:17.771639  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I1025 22:57:17.771674  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I1025 22:57:17.771640  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I1025 22:57:17.772091  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772144  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772781  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.772806  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773002  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.773021  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773179  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.773255  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.773747  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.773792  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.774065  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.774143  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.774156  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.774286  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.774620  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.775315  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.775393  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.777721  725359 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.777747  725359 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:17.777782  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.778158  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.778209  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.779137  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1025 22:57:17.779690  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.780249  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.780270  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.780756  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.781301  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.781337  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.795859  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I1025 22:57:17.796354  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I1025 22:57:17.796527  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.796726  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.797032  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797053  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797488  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.797567  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797584  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797677  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.798041  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.798308  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.799791  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I1025 22:57:17.799971  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.800466  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.800716  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.801196  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.801221  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.801700  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.802363  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.802448  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.802478  725359 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:17.802546  725359 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:17.804194  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1025 22:57:17.804511  725359 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:17.804535  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:17.804557  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804629  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:17.804640  725359 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:17.804657  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804697  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.805172  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.805189  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.805541  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.805768  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.809358  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.809694  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.810510  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.810544  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810708  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.810784  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810929  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.811051  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.811140  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.811287  725359 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:17.811466  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.811495  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.811518  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.811635  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.814016  725359 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:14.076273  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:14.076902  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:14.076934  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:14.076868  728395 retry.go:31] will retry after 1.185306059s: waiting for domain to come up
	I1025 22:57:15.263741  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:15.264326  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:15.264366  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:15.264273  728395 retry.go:31] will retry after 1.322346565s: waiting for domain to come up
	I1025 22:57:16.588814  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:16.589321  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:16.589347  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:16.589282  728395 retry.go:31] will retry after 1.73855821s: waiting for domain to come up
	I1025 22:57:18.330419  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:18.331024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:18.331054  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:18.330973  728395 retry.go:31] will retry after 2.069940103s: waiting for domain to come up
	I1025 22:57:17.132197  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:17.146596  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:17.146674  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:17.185560  726389 cri.go:89] found id: ""
	I1025 22:57:17.185593  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.185603  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:17.185610  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:17.185670  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:17.220864  726389 cri.go:89] found id: ""
	I1025 22:57:17.220897  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.220910  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:17.220919  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:17.221004  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:17.260844  726389 cri.go:89] found id: ""
	I1025 22:57:17.260872  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.260880  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:17.260887  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:17.260939  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:17.302800  726389 cri.go:89] found id: ""
	I1025 22:57:17.302833  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.302845  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:17.302853  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:17.302913  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:17.337851  726389 cri.go:89] found id: ""
	I1025 22:57:17.337881  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.337892  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:17.337901  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:17.337959  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:17.374697  726389 cri.go:89] found id: ""
	I1025 22:57:17.374739  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.374752  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:17.374760  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:17.374827  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:17.419883  726389 cri.go:89] found id: ""
	I1025 22:57:17.419913  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.419923  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:17.419929  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:17.419981  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:17.453770  726389 cri.go:89] found id: ""
	I1025 22:57:17.453797  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.453809  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:17.453821  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:17.453835  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:17.467935  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:17.467971  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:17.546221  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:17.546251  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:17.546269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:17.655338  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:17.655395  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:17.696499  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:17.696531  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:17.815285  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:17.815304  725359 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:17.815325  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.821095  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821105  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.821115  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821128  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.821146  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821336  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.821429  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.821740  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821905  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.823391  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I1025 22:57:17.823756  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.824397  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.824420  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.824819  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.825001  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.826499  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.826709  725359 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:17.826724  725359 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:17.826741  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.829834  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830223  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.830256  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830391  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.830555  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.830712  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.830834  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:18.014991  725359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:18.036760  725359 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078787  725359 node_ready.go:49] node "default-k8s-diff-port-166447" has status "Ready":"True"
	I1025 22:57:18.078820  725359 node_ready.go:38] duration metric: took 42.016031ms for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078834  725359 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:18.085830  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:18.122468  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:18.122502  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:18.151830  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:18.164388  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:18.181181  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:18.181212  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:18.239075  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:18.239113  725359 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:18.269994  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:18.270026  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:18.332398  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:18.332427  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:18.431935  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:18.431970  725359 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:18.435490  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:18.435518  725359 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:18.514890  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:18.514925  725359 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:18.543084  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.543128  725359 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:18.577174  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.620888  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:18.620921  725359 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:18.697204  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:18.697242  725359 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:18.810445  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:18.810484  725359 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:18.885504  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:19.260717  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.108837823s)
	I1025 22:57:19.260766  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096340939s)
	I1025 22:57:19.260787  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260802  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.260807  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260863  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261282  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261318  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261344  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261350  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261372  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261385  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261441  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261466  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261484  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261526  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261902  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261916  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.262246  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.263229  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.263251  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.290328  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.290366  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.290838  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.290847  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.290864  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.132386  725359 pod_ready.go:103] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:20.242738  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.665512298s)
	I1025 22:57:20.242808  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.242828  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243142  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243200  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:20.243217  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243225  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.243238  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243508  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243530  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243542  725359 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-166447"
	I1025 22:57:20.984026  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.098465183s)
	I1025 22:57:20.984079  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984091  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984421  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984436  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.984444  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984451  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984739  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984761  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.986558  725359 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-166447 addons enable metrics-server
	
	I1025 22:57:20.987567  725359 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 22:57:20.988902  725359 addons.go:510] duration metric: took 3.23862229s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1025 22:57:21.593090  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.593118  725359 pod_ready.go:82] duration metric: took 3.507254474s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.593131  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597786  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.597816  725359 pod_ready.go:82] duration metric: took 4.674133ms for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597830  725359 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:20.402145  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:20.402661  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:20.402722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:20.402656  728395 retry.go:31] will retry after 3.412502046s: waiting for domain to come up
	I1025 22:57:23.818716  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:23.819208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:23.819237  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:23.819161  728395 retry.go:31] will retry after 4.418758048s: waiting for domain to come up
	I1025 22:57:20.249946  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:20.267883  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:20.267964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:20.317028  726389 cri.go:89] found id: ""
	I1025 22:57:20.317071  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.317083  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:20.317092  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:20.317159  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:20.362449  726389 cri.go:89] found id: ""
	I1025 22:57:20.362481  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.362491  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:20.362497  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:20.362548  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:20.398308  726389 cri.go:89] found id: ""
	I1025 22:57:20.398348  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.398369  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:20.398377  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:20.398450  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:20.446702  726389 cri.go:89] found id: ""
	I1025 22:57:20.446731  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.446740  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:20.446746  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:20.446798  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:20.489776  726389 cri.go:89] found id: ""
	I1025 22:57:20.489809  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.489826  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:20.489833  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:20.489899  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:20.535387  726389 cri.go:89] found id: ""
	I1025 22:57:20.535415  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.535426  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:20.535442  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:20.535507  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:20.573433  726389 cri.go:89] found id: ""
	I1025 22:57:20.573467  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.573479  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:20.573487  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:20.573554  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:20.613584  726389 cri.go:89] found id: ""
	I1025 22:57:20.613619  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.613631  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:20.613643  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:20.613664  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:20.675387  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:20.675426  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:20.691467  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:20.691513  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:20.813943  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:20.813975  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:20.813992  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:20.904974  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:20.905028  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.450429  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:23.464096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:23.464169  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:23.500126  726389 cri.go:89] found id: ""
	I1025 22:57:23.500152  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.500161  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:23.500167  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:23.500220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:23.534564  726389 cri.go:89] found id: ""
	I1025 22:57:23.534597  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.534608  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:23.534615  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:23.534666  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:23.577493  726389 cri.go:89] found id: ""
	I1025 22:57:23.577529  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.577541  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:23.577551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:23.577679  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:23.616432  726389 cri.go:89] found id: ""
	I1025 22:57:23.616463  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.616474  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:23.616488  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:23.616553  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:23.655679  726389 cri.go:89] found id: ""
	I1025 22:57:23.655715  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.655727  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:23.655735  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:23.655804  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:23.695528  726389 cri.go:89] found id: ""
	I1025 22:57:23.695558  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.695570  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:23.695578  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:23.695642  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:23.734570  726389 cri.go:89] found id: ""
	I1025 22:57:23.734610  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.734622  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:23.734631  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:23.734703  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:23.774178  726389 cri.go:89] found id: ""
	I1025 22:57:23.774213  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.774225  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:23.774238  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:23.774254  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:23.857347  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:23.857389  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.896130  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:23.896167  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:23.948276  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:23.948320  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:23.961809  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:23.961840  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:24.053746  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:23.604335  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.104577  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.613548  725359 pod_ready.go:93] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.613571  725359 pod_ready.go:82] duration metric: took 5.015733422s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.613582  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621883  725359 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.621908  725359 pod_ready.go:82] duration metric: took 8.319327ms for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621919  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630956  725359 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.630981  725359 pod_ready.go:82] duration metric: took 9.055173ms for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630994  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647393  725359 pod_ready.go:93] pod "kube-proxy-zqjjc" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.647428  725359 pod_ready.go:82] duration metric: took 16.426697ms for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647440  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658038  725359 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.658067  725359 pod_ready.go:82] duration metric: took 10.617453ms for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658077  725359 pod_ready.go:39] duration metric: took 8.57922838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:26.658096  725359 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:26.658162  725359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.705852  725359 api_server.go:72] duration metric: took 8.955782657s to wait for apiserver process to appear ...
	I1025 22:57:26.705882  725359 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:26.705909  725359 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8444/healthz ...
	I1025 22:57:26.712359  725359 api_server.go:279] https://192.168.61.249:8444/healthz returned 200:
	ok
	I1025 22:57:26.713354  725359 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:26.713378  725359 api_server.go:131] duration metric: took 7.487484ms to wait for apiserver health ...
	I1025 22:57:26.713397  725359 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:26.809108  725359 system_pods.go:59] 9 kube-system pods found
	I1025 22:57:26.809156  725359 system_pods.go:61] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:26.809165  725359 system_pods.go:61] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:26.809177  725359 system_pods.go:61] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:26.809184  725359 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:26.809191  725359 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:26.809196  725359 system_pods.go:61] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:26.809203  725359 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:26.809216  725359 system_pods.go:61] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:26.809226  725359 system_pods.go:61] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:26.809243  725359 system_pods.go:74] duration metric: took 95.838638ms to wait for pod list to return data ...
	I1025 22:57:26.809259  725359 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:27.003062  725359 default_sa.go:45] found service account: "default"
	I1025 22:57:27.003103  725359 default_sa.go:55] duration metric: took 193.830229ms for default service account to be created ...
	I1025 22:57:27.003120  725359 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 22:57:27.206396  725359 system_pods.go:86] 9 kube-system pods found
	I1025 22:57:27.206438  725359 system_pods.go:89] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:27.206446  725359 system_pods.go:89] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:27.206452  725359 system_pods.go:89] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:27.206457  725359 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:27.206463  725359 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:27.206468  725359 system_pods.go:89] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:27.206473  725359 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:27.206485  725359 system_pods.go:89] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:27.206491  725359 system_pods.go:89] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:27.206500  725359 system_pods.go:126] duration metric: took 203.373296ms to wait for k8s-apps to be running ...
	I1025 22:57:27.206511  725359 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:57:27.206568  725359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:57:27.236359  725359 system_svc.go:56] duration metric: took 29.835602ms WaitForService to wait for kubelet
	I1025 22:57:27.236401  725359 kubeadm.go:582] duration metric: took 9.486336184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:57:27.236428  725359 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:27.404633  725359 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:27.404660  725359 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:27.404674  725359 node_conditions.go:105] duration metric: took 168.23879ms to run NodePressure ...
	I1025 22:57:27.404686  725359 start.go:241] waiting for startup goroutines ...
	I1025 22:57:27.404693  725359 start.go:246] waiting for cluster config update ...
	I1025 22:57:27.404704  725359 start.go:255] writing updated cluster config ...
	I1025 22:57:27.404950  725359 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:27.471713  725359 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:27.473904  725359 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-166447" cluster and "default" namespace by default
	I1025 22:57:28.242024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242494  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has current primary IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242523  728361 main.go:141] libmachine: (newest-cni-357495) found domain IP: 192.168.72.113
	I1025 22:57:28.242535  728361 main.go:141] libmachine: (newest-cni-357495) reserving static IP address...
	I1025 22:57:28.242970  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.243000  728361 main.go:141] libmachine: (newest-cni-357495) DBG | skip adding static IP to network mk-newest-cni-357495 - found existing host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"}
	I1025 22:57:28.243013  728361 main.go:141] libmachine: (newest-cni-357495) reserved static IP address 192.168.72.113 for domain newest-cni-357495
	I1025 22:57:28.243028  728361 main.go:141] libmachine: (newest-cni-357495) waiting for SSH...
	I1025 22:57:28.243042  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Getting to WaitForSSH function...
	I1025 22:57:28.245300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245651  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.245680  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245811  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH client type: external
	I1025 22:57:28.245835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa (-rw-------)
	I1025 22:57:28.245865  728361 main.go:141] libmachine: (newest-cni-357495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:57:28.245876  728361 main.go:141] libmachine: (newest-cni-357495) DBG | About to run SSH command:
	I1025 22:57:28.245886  728361 main.go:141] libmachine: (newest-cni-357495) DBG | exit 0
	I1025 22:57:28.377143  728361 main.go:141] libmachine: (newest-cni-357495) DBG | SSH cmd err, output: <nil>: 
	I1025 22:57:28.377542  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetConfigRaw
	I1025 22:57:28.378182  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.380998  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381388  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.381422  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381661  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:28.382355  728361 machine.go:93] provisionDockerMachine start ...
	I1025 22:57:28.382383  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:28.382637  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.384883  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385241  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.385266  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385388  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.385550  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385705  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385873  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.386055  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.386295  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.386309  728361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 22:57:28.489731  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 22:57:28.489766  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490029  728361 buildroot.go:166] provisioning hostname "newest-cni-357495"
	I1025 22:57:28.490072  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490223  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.493372  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493804  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.493835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493978  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.494135  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494278  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494406  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.494585  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.494823  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.494850  728361 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-357495 && echo "newest-cni-357495" | sudo tee /etc/hostname
	I1025 22:57:28.612233  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-357495
	
	I1025 22:57:28.612271  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.615209  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615542  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.615568  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615802  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.616013  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616377  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.616605  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.616836  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.616860  728361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-357495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-357495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-357495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:57:28.731112  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:57:28.731149  728361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:57:28.731175  728361 buildroot.go:174] setting up certificates
	I1025 22:57:28.731189  728361 provision.go:84] configureAuth start
	I1025 22:57:28.731202  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.731508  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.734722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735105  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.735159  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735349  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.737700  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738025  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.738059  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738280  728361 provision.go:143] copyHostCerts
	I1025 22:57:28.738356  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:57:28.738370  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:57:28.738437  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:57:28.738544  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:57:28.738551  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:57:28.738576  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:57:28.738644  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:57:28.738652  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:57:28.738673  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:57:28.738739  728361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.newest-cni-357495 san=[127.0.0.1 192.168.72.113 localhost minikube newest-cni-357495]
	I1025 22:57:28.833704  728361 provision.go:177] copyRemoteCerts
	I1025 22:57:28.833762  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:57:28.833797  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.836780  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837177  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.837208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837372  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.837573  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.837734  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.837863  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:28.922411  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:57:28.948328  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:57:28.976524  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 22:57:29.005619  728361 provision.go:87] duration metric: took 274.411907ms to configureAuth
	I1025 22:57:29.005654  728361 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:57:29.005887  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:29.005985  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:26.553979  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.567886  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:26.567964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:26.603338  726389 cri.go:89] found id: ""
	I1025 22:57:26.603376  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.603389  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:26.603403  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:26.603475  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:26.637525  726389 cri.go:89] found id: ""
	I1025 22:57:26.637548  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.637556  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:26.637562  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:26.637609  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:26.672117  726389 cri.go:89] found id: ""
	I1025 22:57:26.672150  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.672159  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:26.672166  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:26.672230  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:26.705637  726389 cri.go:89] found id: ""
	I1025 22:57:26.705669  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.705681  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:26.705689  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:26.705762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:26.759040  726389 cri.go:89] found id: ""
	I1025 22:57:26.759070  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.759084  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:26.759092  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:26.759161  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:26.811512  726389 cri.go:89] found id: ""
	I1025 22:57:26.811537  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.811547  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:26.811555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:26.811641  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:26.851215  726389 cri.go:89] found id: ""
	I1025 22:57:26.851245  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.851256  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:26.851264  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:26.851330  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:26.884460  726389 cri.go:89] found id: ""
	I1025 22:57:26.884495  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.884508  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:26.884520  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:26.884535  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:26.960048  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:26.960092  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:26.998588  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:26.998620  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:27.061646  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:27.061692  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:27.078350  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:27.078385  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:27.150478  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:29.009371  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.009852  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.009887  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.010056  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.010269  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010451  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010622  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.010818  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.010989  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.011004  728361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:57:29.235601  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:57:29.235655  728361 machine.go:96] duration metric: took 853.280404ms to provisionDockerMachine
	I1025 22:57:29.235672  728361 start.go:293] postStartSetup for "newest-cni-357495" (driver="kvm2")
	I1025 22:57:29.235694  728361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:57:29.235722  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.236076  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:57:29.236116  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.239049  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239449  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.239482  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239668  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.239889  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.240099  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.240319  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.327450  728361 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:57:29.331888  728361 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:57:29.331921  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:57:29.331987  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:57:29.332065  728361 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:57:29.332195  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:57:29.341892  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:29.367038  728361 start.go:296] duration metric: took 131.349254ms for postStartSetup
	I1025 22:57:29.367084  728361 fix.go:56] duration metric: took 20.2556649s for fixHost
	I1025 22:57:29.367106  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.369924  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370255  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.370285  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370425  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.370590  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370745  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370950  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.371124  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.371304  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.371313  728361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:57:29.474861  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729897049.432427295
	
	I1025 22:57:29.474889  728361 fix.go:216] guest clock: 1729897049.432427295
	I1025 22:57:29.474899  728361 fix.go:229] Guest: 2024-10-25 22:57:29.432427295 +0000 UTC Remote: 2024-10-25 22:57:29.367088624 +0000 UTC m=+20.400142994 (delta=65.338671ms)
	I1025 22:57:29.474946  728361 fix.go:200] guest clock delta is within tolerance: 65.338671ms
	I1025 22:57:29.474960  728361 start.go:83] releasing machines lock for "newest-cni-357495", held for 20.363562153s
	I1025 22:57:29.474986  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.475248  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:29.478056  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478406  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.478437  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478628  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479132  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479319  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479468  728361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:57:29.479506  728361 ssh_runner.go:195] Run: cat /version.json
	I1025 22:57:29.479527  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.479536  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.482531  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.482637  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483074  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483131  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483191  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483471  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483481  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483652  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483931  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.483955  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.484103  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.484143  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.582367  728361 ssh_runner.go:195] Run: systemctl --version
	I1025 22:57:29.590693  728361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:57:29.745303  728361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:57:29.754423  728361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:57:29.754501  728361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:57:29.775617  728361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:57:29.775648  728361 start.go:495] detecting cgroup driver to use...
	I1025 22:57:29.775747  728361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:57:29.799558  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:57:29.818705  728361 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:57:29.818806  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:57:29.833563  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:57:29.853630  728361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:57:29.983430  728361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:57:30.197267  728361 docker.go:233] disabling docker service ...
	I1025 22:57:30.197347  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:57:30.216012  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:57:30.230378  728361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:57:30.360555  728361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:57:30.484679  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:57:30.503208  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:57:30.523720  728361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 22:57:30.523795  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.535314  728361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:57:30.535383  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.546715  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.557826  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.569760  728361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:57:30.582722  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.593853  728361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.611448  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.622915  728361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:57:30.633073  728361 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:57:30.633147  728361 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:57:30.647230  728361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:57:30.657299  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:30.768765  728361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 22:57:30.854500  728361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:57:30.854590  728361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:57:30.859405  728361 start.go:563] Will wait 60s for crictl version
	I1025 22:57:30.859473  728361 ssh_runner.go:195] Run: which crictl
	I1025 22:57:30.863420  728361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:57:30.908862  728361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:57:30.908976  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.939582  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.978153  728361 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1025 22:57:30.979430  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:30.982243  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982608  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:30.982641  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982834  728361 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1025 22:57:30.988035  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:57:31.004301  728361 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 22:57:31.005441  728361 kubeadm.go:883] updating cluster {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:57:31.005579  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:31.005635  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:31.049853  728361 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1025 22:57:31.049928  728361 ssh_runner.go:195] Run: which lz4
	I1025 22:57:31.054174  728361 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:57:31.058473  728361 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:57:31.058505  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1025 22:57:32.497532  728361 crio.go:462] duration metric: took 1.44340372s to copy over tarball
	I1025 22:57:32.497637  728361 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:57:29.650805  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:29.664484  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:29.664563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:29.706919  726389 cri.go:89] found id: ""
	I1025 22:57:29.706950  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.706961  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:29.706968  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:29.707032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:29.748272  726389 cri.go:89] found id: ""
	I1025 22:57:29.748301  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.748313  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:29.748322  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:29.748383  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:29.783239  726389 cri.go:89] found id: ""
	I1025 22:57:29.783281  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.783303  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:29.783315  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:29.783381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:29.828942  726389 cri.go:89] found id: ""
	I1025 22:57:29.829005  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.829021  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:29.829031  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:29.829112  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:29.874831  726389 cri.go:89] found id: ""
	I1025 22:57:29.874864  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.874876  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:29.874885  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:29.874950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:29.920380  726389 cri.go:89] found id: ""
	I1025 22:57:29.920411  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.920422  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:29.920430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:29.920495  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:29.964594  726389 cri.go:89] found id: ""
	I1025 22:57:29.964624  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.964636  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:29.964643  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:29.964713  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:30.000416  726389 cri.go:89] found id: ""
	I1025 22:57:30.000449  726389 logs.go:282] 0 containers: []
	W1025 22:57:30.000461  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:30.000475  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:30.000500  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:30.073028  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:30.073055  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:30.073072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:30.158430  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:30.158481  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:30.212493  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:30.212530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:30.289552  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:30.289606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:32.808776  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:32.822039  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:32.822111  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:32.857007  726389 cri.go:89] found id: ""
	I1025 22:57:32.857042  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.857054  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:32.857063  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:32.857122  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:32.902015  726389 cri.go:89] found id: ""
	I1025 22:57:32.902045  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.902057  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:32.902066  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:32.902146  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:32.962252  726389 cri.go:89] found id: ""
	I1025 22:57:32.962287  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.962299  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:32.962307  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:32.962381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:33.010092  726389 cri.go:89] found id: ""
	I1025 22:57:33.010129  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.010140  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:33.010149  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:33.010219  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:33.057453  726389 cri.go:89] found id: ""
	I1025 22:57:33.057482  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.057492  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:33.057499  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:33.057618  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:33.096991  726389 cri.go:89] found id: ""
	I1025 22:57:33.097024  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.097035  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:33.097042  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:33.097092  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:33.130710  726389 cri.go:89] found id: ""
	I1025 22:57:33.130740  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.130751  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:33.130759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:33.130820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:33.169440  726389 cri.go:89] found id: ""
	I1025 22:57:33.169479  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.169491  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:33.169505  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:33.169520  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:33.249558  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:33.249586  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:33.249603  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:33.364568  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:33.364613  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:33.415233  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:33.415264  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:33.472943  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:33.473014  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:34.612317  728361 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11464276s)
	I1025 22:57:34.612352  728361 crio.go:469] duration metric: took 2.114771262s to extract the tarball
	I1025 22:57:34.612363  728361 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:57:34.651878  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:34.694439  728361 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 22:57:34.694463  728361 cache_images.go:84] Images are preloaded, skipping loading
	I1025 22:57:34.694472  728361 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.31.1 crio true true} ...
	I1025 22:57:34.694604  728361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-357495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:57:34.694677  728361 ssh_runner.go:195] Run: crio config
	I1025 22:57:34.748152  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:34.748178  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:34.748189  728361 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1025 22:57:34.748215  728361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-357495 NodeName:newest-cni-357495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:57:34.748372  728361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-357495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:57:34.748437  728361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1025 22:57:34.760143  728361 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:57:34.760202  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:57:34.771582  728361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1025 22:57:34.787944  728361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:57:34.804113  728361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1025 22:57:34.820688  728361 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I1025 22:57:34.824565  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:57:34.837134  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:34.952711  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:34.970911  728361 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495 for IP: 192.168.72.113
	I1025 22:57:34.970937  728361 certs.go:194] generating shared ca certs ...
	I1025 22:57:34.970959  728361 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:34.971160  728361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:57:34.971239  728361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:57:34.971254  728361 certs.go:256] generating profile certs ...
	I1025 22:57:34.971378  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/client.key
	I1025 22:57:34.971475  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key.03300bc5
	I1025 22:57:34.971536  728361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key
	I1025 22:57:34.971687  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:57:34.971735  728361 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:57:34.971748  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:57:34.971781  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:57:34.971814  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:57:34.971845  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:57:34.971898  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:34.972920  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:57:35.035802  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:57:35.066849  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:57:35.095746  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:57:35.122667  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 22:57:35.152086  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:57:35.178215  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:57:35.201152  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 22:57:35.225276  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:57:35.247950  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:57:35.273680  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:57:35.297790  728361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:57:35.314273  728361 ssh_runner.go:195] Run: openssl version
	I1025 22:57:35.319977  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:57:35.332531  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337386  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337435  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.343239  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:57:35.354526  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:57:35.364927  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369254  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369307  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.375175  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:57:35.386699  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:57:35.397181  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401747  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401797  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.407254  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:57:35.417716  728361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:57:35.422134  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:57:35.428825  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:57:35.435416  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:57:35.441327  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:57:35.446978  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:57:35.452887  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 22:57:35.458800  728361 kubeadm.go:392] StartCluster: {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:35.458907  728361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:57:35.458975  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.508107  728361 cri.go:89] found id: ""
	I1025 22:57:35.508190  728361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:57:35.518730  728361 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 22:57:35.518756  728361 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 22:57:35.518812  728361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:57:35.528709  728361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:57:35.529470  728361 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-357495" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:35.529808  728361 kubeconfig.go:62] /home/jenkins/minikube-integration/19758-661979/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-357495" cluster setting kubeconfig missing "newest-cni-357495" context setting]
	I1025 22:57:35.530280  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:35.531821  728361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:57:35.541383  728361 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I1025 22:57:35.541408  728361 kubeadm.go:1160] stopping kube-system containers ...
	I1025 22:57:35.541426  728361 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 22:57:35.541475  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.581588  728361 cri.go:89] found id: ""
	I1025 22:57:35.581670  728361 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 22:57:35.597329  728361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:57:35.606992  728361 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:57:35.607032  728361 kubeadm.go:157] found existing configuration files:
	
	I1025 22:57:35.607078  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:57:35.616052  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:57:35.616100  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:57:35.625202  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:57:35.634016  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:57:35.634060  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:57:35.643656  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.654009  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:57:35.654059  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.664119  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:57:35.673468  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:57:35.673524  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:57:35.683499  728361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:57:35.693207  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:35.800242  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.661671  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.883048  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.950556  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:37.060335  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:37.060456  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:37.560722  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.061291  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.560646  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:35.989111  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:36.002822  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:36.002901  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:36.042325  726389 cri.go:89] found id: ""
	I1025 22:57:36.042362  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.042373  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:36.042381  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:36.042446  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:36.083924  726389 cri.go:89] found id: ""
	I1025 22:57:36.083957  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.083968  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:36.083976  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:36.084047  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:36.117475  726389 cri.go:89] found id: ""
	I1025 22:57:36.117511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.117523  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:36.117531  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:36.117592  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:36.151851  726389 cri.go:89] found id: ""
	I1025 22:57:36.151888  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.151901  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:36.151909  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:36.151975  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:36.188798  726389 cri.go:89] found id: ""
	I1025 22:57:36.188825  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.188837  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:36.188845  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:36.188905  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:36.222491  726389 cri.go:89] found id: ""
	I1025 22:57:36.222532  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.222544  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:36.222555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:36.222621  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:36.257481  726389 cri.go:89] found id: ""
	I1025 22:57:36.257511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.257520  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:36.257527  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:36.257580  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:36.291774  726389 cri.go:89] found id: ""
	I1025 22:57:36.291805  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.291817  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:36.291829  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:36.291845  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:36.341240  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:36.341288  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:36.355280  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:36.355312  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:36.420727  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:36.420756  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:36.420770  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:36.496896  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:36.496943  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.035530  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.053640  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:39.053721  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:39.095892  726389 cri.go:89] found id: ""
	I1025 22:57:39.095924  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.095936  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:39.095945  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:39.096010  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:39.135571  726389 cri.go:89] found id: ""
	I1025 22:57:39.135603  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.135614  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:39.135621  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:39.135680  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:39.174481  726389 cri.go:89] found id: ""
	I1025 22:57:39.174517  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.174530  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:39.174539  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:39.174597  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:39.214453  726389 cri.go:89] found id: ""
	I1025 22:57:39.214488  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.214505  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:39.214512  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:39.214565  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:39.251084  726389 cri.go:89] found id: ""
	I1025 22:57:39.251111  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.251119  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:39.251126  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:39.251186  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:39.292067  726389 cri.go:89] found id: ""
	I1025 22:57:39.292098  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.292108  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:39.292117  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:39.292183  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:39.331918  726389 cri.go:89] found id: ""
	I1025 22:57:39.331953  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.331964  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:39.331972  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:39.332032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:39.366300  726389 cri.go:89] found id: ""
	I1025 22:57:39.366334  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.366346  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:39.366358  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:39.366373  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:39.451297  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:39.451344  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.492655  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:39.492695  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:39.551959  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:39.552004  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:39.565900  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:39.565934  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:39.637894  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:39.061158  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.083761  728361 api_server.go:72] duration metric: took 2.023424888s to wait for apiserver process to appear ...
	I1025 22:57:39.083795  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:39.083833  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:39.084432  728361 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I1025 22:57:39.584481  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.830058  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.830086  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:41.830102  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.851621  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.851664  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:42.083965  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.098809  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.098843  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:42.583931  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.595538  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.595610  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.084096  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.099317  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:43.099347  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.583916  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.588837  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:43.595393  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:43.595419  728361 api_server.go:131] duration metric: took 4.511617345s to wait for apiserver health ...
	I1025 22:57:43.595430  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:43.595436  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:43.597362  728361 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:43.598677  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:43.611172  728361 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:43.628657  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:43.639416  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:43.639446  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:43.639454  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:43.639466  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:43.639477  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:43.639487  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:43.639495  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:43.639505  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:43.639512  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:43.639518  728361 system_pods.go:74] duration metric: took 10.839818ms to wait for pod list to return data ...
	I1025 22:57:43.639528  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:43.646484  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:43.646509  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:43.646520  728361 node_conditions.go:105] duration metric: took 6.988285ms to run NodePressure ...
	I1025 22:57:43.646539  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:43.915625  728361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:43.934000  728361 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:43.934020  728361 kubeadm.go:597] duration metric: took 8.415258105s to restartPrimaryControlPlane
	I1025 22:57:43.934029  728361 kubeadm.go:394] duration metric: took 8.475239856s to StartCluster
	I1025 22:57:43.934049  728361 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.934116  728361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:43.935164  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.935405  728361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:43.935533  728361 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:43.935636  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:43.935668  728361 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-357495"
	I1025 22:57:43.935696  728361 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-357495"
	W1025 22:57:43.935713  728361 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:43.935727  728361 addons.go:69] Setting metrics-server=true in profile "newest-cni-357495"
	I1025 22:57:43.935749  728361 addons.go:234] Setting addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:43.935753  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	W1025 22:57:43.935763  728361 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:43.935818  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936205  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936245  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936283  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.935703  728361 addons.go:69] Setting default-storageclass=true in profile "newest-cni-357495"
	I1025 22:57:43.936320  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936321  728361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-357495"
	I1025 22:57:43.935713  728361 addons.go:69] Setting dashboard=true in profile "newest-cni-357495"
	I1025 22:57:43.936591  728361 addons.go:234] Setting addon dashboard=true in "newest-cni-357495"
	W1025 22:57:43.936602  728361 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:43.936637  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936834  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936873  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937009  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.937048  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937659  728361 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:43.939144  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:43.955960  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1025 22:57:43.956461  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.956979  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957007  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.957063  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I1025 22:57:43.957440  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.957472  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.957898  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957919  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.958078  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958127  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.958280  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.958921  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958970  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.960741  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I1025 22:57:43.961123  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.961708  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.961724  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.962094  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.962267  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.965281  728361 addons.go:234] Setting addon default-storageclass=true in "newest-cni-357495"
	W1025 22:57:43.965301  728361 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:43.965333  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.965612  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.965651  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.967851  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I1025 22:57:43.968252  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.968859  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.968877  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.969297  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.969895  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.969938  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.978224  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I1025 22:57:43.980247  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I1025 22:57:43.991129  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1025 22:57:43.997786  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.997926  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998540  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998646  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998705  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998729  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998995  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999070  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999305  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999365  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999543  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.999565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.999954  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.000573  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:44.000731  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:44.001562  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.002141  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.003847  728361 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:44.005301  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:44.005326  728361 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:44.005353  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.008444  728361 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:44.009433  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.009938  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.009962  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.010211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.010419  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.010565  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.010672  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.014136  728361 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.014160  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:44.014183  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.017633  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018066  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.018084  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018360  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.018538  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.018671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.018843  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.024748  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I1025 22:57:44.025455  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.025952  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.025974  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.027949  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.028345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.030416  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.030592  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1025 22:57:44.030623  728361 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.030636  728361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:44.030653  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.031671  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.032355  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.032380  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.033013  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.033268  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.034055  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034580  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.034604  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034914  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.035097  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.035108  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.035257  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.035424  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.037146  728361 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:44.038544  728361 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:42.138727  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:42.152525  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:42.152616  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:42.190900  726389 cri.go:89] found id: ""
	I1025 22:57:42.190935  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.190947  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:42.190955  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:42.191043  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:42.237668  726389 cri.go:89] found id: ""
	I1025 22:57:42.237698  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.237711  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:42.237720  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:42.237781  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:42.289049  726389 cri.go:89] found id: ""
	I1025 22:57:42.289077  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.289087  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:42.289096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:42.289155  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:42.334276  726389 cri.go:89] found id: ""
	I1025 22:57:42.334306  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.334318  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:42.334327  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:42.334385  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:42.379295  726389 cri.go:89] found id: ""
	I1025 22:57:42.379317  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.379325  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:42.379331  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:42.379375  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:42.416452  726389 cri.go:89] found id: ""
	I1025 22:57:42.416484  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.416496  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:42.416504  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:42.416563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:42.455290  726389 cri.go:89] found id: ""
	I1025 22:57:42.455324  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.455336  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:42.455352  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:42.455421  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:42.493367  726389 cri.go:89] found id: ""
	I1025 22:57:42.493396  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.493413  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:42.493426  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:42.493444  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:42.511673  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:42.511724  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:42.589951  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:42.589976  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:42.589994  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:42.697460  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:42.697498  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:42.757645  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:42.757672  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:44.039861  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:44.039876  728361 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:44.039902  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.043936  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044280  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.044300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044646  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.044847  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.045047  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.045212  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.214968  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:44.230045  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:44.230142  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:44.256130  728361 api_server.go:72] duration metric: took 320.677383ms to wait for apiserver process to appear ...
	I1025 22:57:44.256168  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:44.256195  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:44.261782  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:44.262769  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:44.262792  728361 api_server.go:131] duration metric: took 6.616839ms to wait for apiserver health ...
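	# A 200 from /healthz is what gates the remaining waits below. A hypothetical manual equivalent
	# (the address and port are taken from this log; the check here is a sketch, not the test's own
	# client — the apiserver's default RBAC exposes /healthz to unauthenticated callers, so -k suffices):
	#
	#   curl -k https://192.168.72.113:8443/healthz
	#   # expected output: ok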
	I1025 22:57:44.262808  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:44.268736  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:44.268771  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:44.268782  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:44.268794  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:44.268802  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:44.268811  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:44.268824  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:44.268835  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:44.268844  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:44.268853  728361 system_pods.go:74] duration metric: took 6.033238ms to wait for pod list to return data ...
	I1025 22:57:44.268865  728361 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:44.274413  728361 default_sa.go:45] found service account: "default"
	I1025 22:57:44.274435  728361 default_sa.go:55] duration metric: took 5.560777ms for default service account to be created ...
	I1025 22:57:44.274448  728361 kubeadm.go:582] duration metric: took 339.005004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:44.274466  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:44.276931  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:44.276950  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:44.276977  728361 node_conditions.go:105] duration metric: took 2.502915ms to run NodePressure ...
	I1025 22:57:44.276992  728361 start.go:241] waiting for startup goroutines ...
	I1025 22:57:44.300122  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.327780  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:44.327815  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:44.334907  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:44.334936  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:44.365482  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:44.365518  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:44.376945  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.441691  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:44.441722  728361 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:44.443225  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:44.443247  728361 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:44.510983  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.511014  728361 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:44.522596  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:44.522631  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:44.593578  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.600368  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:44.600392  728361 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:44.687614  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:44.687642  728361 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:44.726363  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:44.726391  728361 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:44.771220  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:44.771247  728361 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:44.800050  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:44.800079  728361 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:44.875738  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:46.117050  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816877105s)
	I1025 22:57:46.117115  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.740124565s)
	I1025 22:57:46.117165  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117185  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117211  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.52359958s)
	I1025 22:57:46.117120  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117287  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117247  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117367  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117495  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117543  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117552  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117560  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117567  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117623  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117642  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117663  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117671  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117687  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117713  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117739  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117751  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117767  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.120140  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120155  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120155  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120172  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120168  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120191  728361 addons.go:475] Verifying addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:46.120226  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120252  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120604  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120614  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.137578  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.137598  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.137943  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.137945  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.137973  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545157  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.669353935s)
	I1025 22:57:46.545231  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545621  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545660  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545693  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545954  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545969  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.547693  728361 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-357495 addons enable metrics-server
	
	I1025 22:57:46.549219  728361 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1025 22:57:46.550703  728361 addons.go:510] duration metric: took 2.615173183s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1025 22:57:46.550752  728361 start.go:246] waiting for cluster config update ...
	I1025 22:57:46.550768  728361 start.go:255] writing updated cluster config ...
	I1025 22:57:46.551105  728361 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:46.603794  728361 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:46.605589  728361 out.go:177] * Done! kubectl is now configured to use "newest-cni-357495" cluster and "default" namespace by default
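	# The interleaved PID 726389 entries before and after this point belong to a second profile whose
	# kube-apiserver never came up; minikube repeats the same diagnostic pass every few seconds. A
	# minimal sketch of that pass, using only commands that appear verbatim in this log (run on the
	# node, e.g. via `minikube ssh` into that profile):
	#
	#   sudo crictl ps -a --quiet --name=kube-apiserver   # repeated for etcd, coredns, kube-scheduler, kube-proxy, ...
	#   sudo journalctl -u kubelet -n 400
	#   sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	#   sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	#   sudo journalctl -u crio -n 400
	#   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a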
	I1025 22:57:45.312071  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:45.325800  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:45.325881  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:45.370543  726389 cri.go:89] found id: ""
	I1025 22:57:45.370572  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.370582  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:45.370590  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:45.370659  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:45.411970  726389 cri.go:89] found id: ""
	I1025 22:57:45.412009  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.412022  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:45.412032  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:45.412099  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:45.445037  726389 cri.go:89] found id: ""
	I1025 22:57:45.445073  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.445085  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:45.445094  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:45.445158  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:45.483563  726389 cri.go:89] found id: ""
	I1025 22:57:45.483595  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.483607  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:45.483615  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:45.483683  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:45.522944  726389 cri.go:89] found id: ""
	I1025 22:57:45.522978  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.522991  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:45.522999  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:45.523060  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:45.558055  726389 cri.go:89] found id: ""
	I1025 22:57:45.558086  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.558099  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:45.558107  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:45.558172  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:45.591533  726389 cri.go:89] found id: ""
	I1025 22:57:45.591564  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.591574  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:45.591581  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:45.591651  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:45.634951  726389 cri.go:89] found id: ""
	I1025 22:57:45.634985  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.634996  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:45.635009  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:45.635026  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:45.684807  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:45.684847  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:45.699038  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:45.699072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:45.762687  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:45.762718  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:45.762736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:45.851222  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:45.851265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:48.389992  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:48.403774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:48.403842  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:48.441883  726389 cri.go:89] found id: ""
	I1025 22:57:48.441908  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.441919  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:48.441929  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:48.441982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:48.477527  726389 cri.go:89] found id: ""
	I1025 22:57:48.477550  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.477558  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:48.477564  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:48.477612  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:48.514457  726389 cri.go:89] found id: ""
	I1025 22:57:48.514489  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.514500  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:48.514510  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:48.514579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:48.551264  726389 cri.go:89] found id: ""
	I1025 22:57:48.551296  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.551306  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:48.551312  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:48.551369  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:48.585426  726389 cri.go:89] found id: ""
	I1025 22:57:48.585454  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.585465  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:48.585473  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:48.585537  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:48.623734  726389 cri.go:89] found id: ""
	I1025 22:57:48.623772  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.623785  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:48.623794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:48.623865  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:48.661170  726389 cri.go:89] found id: ""
	I1025 22:57:48.661207  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.661219  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:48.661227  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:48.661304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:48.700776  726389 cri.go:89] found id: ""
	I1025 22:57:48.700803  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.700812  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:48.700825  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:48.700842  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:48.753294  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:48.753326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:48.770412  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:48.770443  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:48.847535  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:48.847562  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:48.847577  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:48.920817  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:48.920862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:51.460695  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:51.473870  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:51.473945  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:51.510350  726389 cri.go:89] found id: ""
	I1025 22:57:51.510383  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.510393  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:51.510406  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:51.510480  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:51.546705  726389 cri.go:89] found id: ""
	I1025 22:57:51.546742  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.546754  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:51.546762  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:51.546830  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:51.583728  726389 cri.go:89] found id: ""
	I1025 22:57:51.583759  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.583767  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:51.583774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:51.583831  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:51.623229  726389 cri.go:89] found id: ""
	I1025 22:57:51.623260  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.623269  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:51.623275  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:51.623332  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:51.661673  726389 cri.go:89] found id: ""
	I1025 22:57:51.661700  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.661710  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:51.661716  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:51.661769  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:51.707516  726389 cri.go:89] found id: ""
	I1025 22:57:51.707551  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.707564  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:51.707572  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:51.707646  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:51.745242  726389 cri.go:89] found id: ""
	I1025 22:57:51.745277  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.745288  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:51.745295  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:51.745360  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:51.778136  726389 cri.go:89] found id: ""
	I1025 22:57:51.778165  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.778180  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:51.778193  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:51.778210  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:51.826323  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:51.826365  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:51.839635  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:51.839673  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:51.905218  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:51.905242  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:51.905260  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:51.979641  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:51.979680  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.519362  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:54.532482  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:54.532560  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:54.566193  726389 cri.go:89] found id: ""
	I1025 22:57:54.566221  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.566232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:54.566240  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:54.566304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:54.602139  726389 cri.go:89] found id: ""
	I1025 22:57:54.602166  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.602178  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:54.602187  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:54.602245  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:54.636484  726389 cri.go:89] found id: ""
	I1025 22:57:54.636519  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.636529  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:54.636545  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:54.636610  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:54.670617  726389 cri.go:89] found id: ""
	I1025 22:57:54.670649  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.670660  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:54.670666  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:54.670726  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:54.702360  726389 cri.go:89] found id: ""
	I1025 22:57:54.702400  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.702412  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:54.702420  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:54.702491  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:54.736101  726389 cri.go:89] found id: ""
	I1025 22:57:54.736140  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.736153  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:54.736161  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:54.736225  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:54.768706  726389 cri.go:89] found id: ""
	I1025 22:57:54.768744  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.768757  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:54.768766  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:54.768828  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:54.800919  726389 cri.go:89] found id: ""
	I1025 22:57:54.800965  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.800978  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:54.800989  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:54.801008  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:54.866242  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:54.866277  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:54.866294  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:54.942084  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:54.942127  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.979383  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:54.979422  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:55.029227  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:55.029269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.543312  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:57.557090  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:57.557176  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:57.594813  726389 cri.go:89] found id: ""
	I1025 22:57:57.594847  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.594860  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:57.594868  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:57.594933  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:57.629736  726389 cri.go:89] found id: ""
	I1025 22:57:57.629769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.629781  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:57.629790  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:57.629855  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:57.663895  726389 cri.go:89] found id: ""
	I1025 22:57:57.663927  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.663935  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:57.663940  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:57.663991  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:57.696122  726389 cri.go:89] found id: ""
	I1025 22:57:57.696153  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.696164  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:57.696171  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:57.696238  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:57.733740  726389 cri.go:89] found id: ""
	I1025 22:57:57.733769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.733778  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:57.733785  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:57.733839  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:57.766855  726389 cri.go:89] found id: ""
	I1025 22:57:57.766886  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.766897  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:57.766905  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:57.766971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:57.804080  726389 cri.go:89] found id: ""
	I1025 22:57:57.804110  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.804118  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:57.804125  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:57.804178  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:57.837482  726389 cri.go:89] found id: ""
	I1025 22:57:57.837511  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.837520  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:57.837530  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:57.837542  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:57.889217  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:57.889265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.902999  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:57.903039  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:57.968303  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:57.968327  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:57.968345  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:58.046929  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:58.046981  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:00.589410  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:00.602271  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:00.602344  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:00.635947  726389 cri.go:89] found id: ""
	I1025 22:58:00.635980  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.635989  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:00.635995  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:00.636057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:00.668039  726389 cri.go:89] found id: ""
	I1025 22:58:00.668072  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.668083  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:00.668092  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:00.668163  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:00.700889  726389 cri.go:89] found id: ""
	I1025 22:58:00.700916  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.700925  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:00.700931  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:00.701026  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:00.734409  726389 cri.go:89] found id: ""
	I1025 22:58:00.734440  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.734452  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:00.734459  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:00.734527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:00.770435  726389 cri.go:89] found id: ""
	I1025 22:58:00.770462  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.770469  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:00.770476  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:00.770535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:00.803431  726389 cri.go:89] found id: ""
	I1025 22:58:00.803466  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.803477  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:00.803486  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:00.803552  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:00.837896  726389 cri.go:89] found id: ""
	I1025 22:58:00.837932  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.837943  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:00.837951  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:00.838025  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:00.875375  726389 cri.go:89] found id: ""
	I1025 22:58:00.875414  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.875425  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:00.875437  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:00.875453  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:00.925019  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:00.925057  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:00.938018  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:00.938050  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:01.008170  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:01.008199  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:01.008216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:01.082487  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:01.082530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:03.623673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:03.637286  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:03.637371  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:03.673836  726389 cri.go:89] found id: ""
	I1025 22:58:03.673884  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.673897  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:03.673906  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:03.673971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:03.706700  726389 cri.go:89] found id: ""
	I1025 22:58:03.706731  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.706742  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:03.706750  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:03.706818  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:03.738775  726389 cri.go:89] found id: ""
	I1025 22:58:03.738804  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.738815  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:03.738823  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:03.738889  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:03.770246  726389 cri.go:89] found id: ""
	I1025 22:58:03.770274  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.770284  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:03.770292  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:03.770366  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:03.811193  726389 cri.go:89] found id: ""
	I1025 22:58:03.811222  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.811231  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:03.811237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:03.811290  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:03.842644  726389 cri.go:89] found id: ""
	I1025 22:58:03.842678  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.842686  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:03.842693  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:03.842750  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:03.874753  726389 cri.go:89] found id: ""
	I1025 22:58:03.874780  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.874788  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:03.874794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:03.874845  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:03.907133  726389 cri.go:89] found id: ""
	I1025 22:58:03.907162  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.907173  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:03.907186  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:03.907202  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:03.957250  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:03.957287  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:03.970381  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:03.970408  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:04.033620  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:04.033647  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:04.033663  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:04.108254  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:04.108296  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:06.647214  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:06.660871  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:06.660942  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:06.694191  726389 cri.go:89] found id: ""
	I1025 22:58:06.694223  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.694232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:06.694243  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:06.694295  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:06.728177  726389 cri.go:89] found id: ""
	I1025 22:58:06.728209  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.728222  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:06.728229  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:06.728300  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:06.761968  726389 cri.go:89] found id: ""
	I1025 22:58:06.762003  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.762015  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:06.762022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:06.762089  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:06.794139  726389 cri.go:89] found id: ""
	I1025 22:58:06.794172  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.794186  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:06.794195  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:06.794261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:06.830436  726389 cri.go:89] found id: ""
	I1025 22:58:06.830468  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.830481  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:06.830490  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:06.830557  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:06.865350  726389 cri.go:89] found id: ""
	I1025 22:58:06.865391  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.865405  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:06.865412  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:06.865468  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:06.899259  726389 cri.go:89] found id: ""
	I1025 22:58:06.899288  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.899298  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:06.899304  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:06.899354  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:06.930753  726389 cri.go:89] found id: ""
	I1025 22:58:06.930784  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.930793  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:06.930802  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:06.930813  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:06.943437  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:06.943464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:07.012837  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:07.012862  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:07.012875  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:07.085555  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:07.085606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:07.125421  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:07.125464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:09.678235  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:09.691802  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:09.691884  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:09.730774  726389 cri.go:89] found id: ""
	I1025 22:58:09.730813  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.730826  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:09.730838  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:09.730893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:09.768841  726389 cri.go:89] found id: ""
	I1025 22:58:09.768878  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.768894  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:09.768903  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:09.768984  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:09.802970  726389 cri.go:89] found id: ""
	I1025 22:58:09.803001  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.803013  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:09.803022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:09.803093  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:09.835041  726389 cri.go:89] found id: ""
	I1025 22:58:09.835075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.835087  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:09.835095  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:09.835148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:09.868561  726389 cri.go:89] found id: ""
	I1025 22:58:09.868590  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.868601  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:09.868609  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:09.868689  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:09.901694  726389 cri.go:89] found id: ""
	I1025 22:58:09.901721  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.901730  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:09.901737  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:09.901793  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:09.936138  726389 cri.go:89] found id: ""
	I1025 22:58:09.936167  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.936178  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:09.936187  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:09.936250  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:09.969041  726389 cri.go:89] found id: ""
	I1025 22:58:09.969075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.969087  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:09.969100  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:09.969115  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:10.036786  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:10.036816  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:10.036832  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:10.108946  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:10.109015  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:10.150241  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:10.150278  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:10.201815  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:10.201862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:12.715673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:12.729286  726389 kubeadm.go:597] duration metric: took 4m4.085037105s to restartPrimaryControlPlane
	W1025 22:58:12.729380  726389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 22:58:12.729407  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:58:13.183339  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:58:13.197871  726389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:58:13.207895  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:58:13.217907  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:58:13.217929  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 22:58:13.217990  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:58:13.227351  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:58:13.227422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:58:13.237158  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:58:13.246361  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:58:13.246431  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:58:13.256260  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.265821  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:58:13.265885  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.275535  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:58:13.284737  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:58:13.284804  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:58:13.294340  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:58:13.357520  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:58:13.357618  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:58:13.492934  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:58:13.493109  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:58:13.493237  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:58:13.676988  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:58:13.679089  726389 out.go:235]   - Generating certificates and keys ...
	I1025 22:58:13.679191  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:58:13.679294  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:58:13.679410  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:58:13.679499  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:58:13.679591  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:58:13.679673  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:58:13.679773  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:58:13.679860  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:58:13.679958  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:58:13.680063  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:58:13.680117  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:58:13.680195  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:58:13.792687  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:58:13.867665  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:58:14.014215  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:58:14.157457  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:58:14.181574  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:58:14.181693  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:58:14.181766  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:58:14.322320  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:58:14.324285  726389 out.go:235]   - Booting up control plane ...
	I1025 22:58:14.324402  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:58:14.328027  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:58:14.331034  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:58:14.332233  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:58:14.340260  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:58:54.338405  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:58:54.338592  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:54.338841  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:58:59.339365  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:59.339661  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:09.340395  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:09.340593  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:29.341629  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:29.341864  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.342793  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:09.343142  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.343171  726389 kubeadm.go:310] 
	I1025 23:00:09.343244  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:00:09.343309  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:00:09.343320  726389 kubeadm.go:310] 
	I1025 23:00:09.343358  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:00:09.343390  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:00:09.343481  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:00:09.343489  726389 kubeadm.go:310] 
	I1025 23:00:09.343609  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:00:09.343655  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:00:09.343701  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:00:09.343711  726389 kubeadm.go:310] 
	I1025 23:00:09.343811  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:00:09.343886  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:00:09.343898  726389 kubeadm.go:310] 
	I1025 23:00:09.344020  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:00:09.344148  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:00:09.344258  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:00:09.344355  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:00:09.344365  726389 kubeadm.go:310] 
	I1025 23:00:09.345056  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:00:09.345170  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:00:09.345261  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1025 23:00:09.345502  726389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 23:00:09.345550  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 23:00:09.805116  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 23:00:09.820225  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 23:00:09.829679  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 23:00:09.829702  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 23:00:09.829756  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 23:00:09.838792  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 23:00:09.838857  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 23:00:09.847823  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 23:00:09.856364  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 23:00:09.856422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 23:00:09.865400  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.873766  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 23:00:09.873827  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.882969  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 23:00:09.891527  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 23:00:09.891606  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 23:00:09.900940  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 23:00:09.969506  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 23:00:09.969568  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 23:00:10.115097  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 23:00:10.115224  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 23:00:10.115397  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 23:00:10.293601  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 23:00:10.296142  726389 out.go:235]   - Generating certificates and keys ...
	I1025 23:00:10.296255  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 23:00:10.296361  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 23:00:10.296502  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 23:00:10.296583  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 23:00:10.296676  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 23:00:10.296748  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 23:00:10.296840  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 23:00:10.296949  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 23:00:10.297071  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 23:00:10.297182  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 23:00:10.297236  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 23:00:10.297334  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 23:00:10.411124  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 23:00:10.530014  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 23:00:10.624647  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 23:00:10.777858  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 23:00:10.797014  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 23:00:10.798078  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 23:00:10.798168  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 23:00:10.940610  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 23:00:10.942427  726389 out.go:235]   - Booting up control plane ...
	I1025 23:00:10.942572  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 23:00:10.959667  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 23:00:10.959757  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 23:00:10.959910  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 23:00:10.963884  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 23:00:50.966097  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 23:00:50.966211  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:50.966448  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:55.966794  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:55.967051  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:05.967421  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:05.967674  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:25.968507  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:25.968765  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969405  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:02:05.969627  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969639  726389 kubeadm.go:310] 
	I1025 23:02:05.969676  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:02:05.969777  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:02:05.969821  726389 kubeadm.go:310] 
	I1025 23:02:05.969885  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:02:05.969935  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:02:05.970078  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:02:05.970092  726389 kubeadm.go:310] 
	I1025 23:02:05.970248  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:02:05.970290  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:02:05.970375  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:02:05.970388  726389 kubeadm.go:310] 
	I1025 23:02:05.970517  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:02:05.970595  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:02:05.970602  726389 kubeadm.go:310] 
	I1025 23:02:05.970729  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:02:05.970840  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:02:05.970914  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:02:05.971019  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:02:05.971031  726389 kubeadm.go:310] 
	I1025 23:02:05.971808  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:02:05.971923  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:02:05.972087  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 23:02:05.972124  726389 kubeadm.go:394] duration metric: took 7m57.377970738s to StartCluster
	I1025 23:02:05.972182  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 23:02:05.972244  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 23:02:06.012800  726389 cri.go:89] found id: ""
	I1025 23:02:06.012837  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.012852  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 23:02:06.012860  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 23:02:06.012925  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 23:02:06.051712  726389 cri.go:89] found id: ""
	I1025 23:02:06.051748  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.051761  726389 logs.go:284] No container was found matching "etcd"
	I1025 23:02:06.051769  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 23:02:06.051834  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 23:02:06.084904  726389 cri.go:89] found id: ""
	I1025 23:02:06.084939  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.084950  726389 logs.go:284] No container was found matching "coredns"
	I1025 23:02:06.084973  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 23:02:06.085056  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 23:02:06.120083  726389 cri.go:89] found id: ""
	I1025 23:02:06.120121  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.120133  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 23:02:06.120140  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 23:02:06.120197  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 23:02:06.154172  726389 cri.go:89] found id: ""
	I1025 23:02:06.154197  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.154205  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 23:02:06.154211  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 23:02:06.154261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 23:02:06.187085  726389 cri.go:89] found id: ""
	I1025 23:02:06.187130  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.187143  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 23:02:06.187152  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 23:02:06.187220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 23:02:06.220391  726389 cri.go:89] found id: ""
	I1025 23:02:06.220421  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.220430  726389 logs.go:284] No container was found matching "kindnet"
	I1025 23:02:06.220437  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 23:02:06.220503  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 23:02:06.254240  726389 cri.go:89] found id: ""
	I1025 23:02:06.254274  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.254286  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 23:02:06.254301  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 23:02:06.254340  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 23:02:06.301861  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 23:02:06.301907  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 23:02:06.315888  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 23:02:06.315919  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 23:02:06.386034  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 23:02:06.386073  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 23:02:06.386091  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 23:02:06.487167  726389 logs.go:123] Gathering logs for container status ...
	I1025 23:02:06.487216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 23:02:06.539615  726389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 23:02:06.539690  726389 out.go:270] * 
	W1025 23:02:06.539895  726389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.539922  726389 out.go:270] * 
	W1025 23:02:06.540790  726389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 23:02:06.545196  726389 out.go:201] 
	W1025 23:02:06.546506  726389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.546544  726389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 23:02:06.546564  726389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 23:02:06.548055  726389 out.go:201] 
	
	
	==> CRI-O <==
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.256501434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897869256480040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=924667e9-514f-48b6-92e0-6093a9ededf5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.257001256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14199337-968c-4667-8d7d-fc144cdbcd9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.257075328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14199337-968c-4667-8d7d-fc144cdbcd9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.257109763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=14199337-968c-4667-8d7d-fc144cdbcd9b name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.291490985Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50a46ed9-0446-453a-a700-fd1fb8c8448d name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.291585327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50a46ed9-0446-453a-a700-fd1fb8c8448d name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.292721695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e47cf61-bdc8-423f-a667-47ce0295a3fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.293110881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897869293087650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e47cf61-bdc8-423f-a667-47ce0295a3fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.293575731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01808dba-9a91-4800-bda7-daf347c1e8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.293646598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01808dba-9a91-4800-bda7-daf347c1e8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.293731777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=01808dba-9a91-4800-bda7-daf347c1e8c5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.328992335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3558b2c-6281-4125-b160-3ae3999eb977 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.329079123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3558b2c-6281-4125-b160-3ae3999eb977 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.330312094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b118ae42-b3c1-4618-92f0-d024e2f10b74 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.330772195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897869330752093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b118ae42-b3c1-4618-92f0-d024e2f10b74 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.331237510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46d839d8-6ceb-417d-ae11-f3994be2f62c name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.331308242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46d839d8-6ceb-417d-ae11-f3994be2f62c name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.331348843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=46d839d8-6ceb-417d-ae11-f3994be2f62c name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.368835821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efc2944f-906a-4710-8526-d3f937dd5d59 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.368929366Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efc2944f-906a-4710-8526-d3f937dd5d59 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.370186879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e55798e-5e93-4590-86f3-4ac197841d52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.370538914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729897869370521441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e55798e-5e93-4590-86f3-4ac197841d52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.371096268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67046c2a-3b1c-4ec0-8888-eee021a6934c name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.371151573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67046c2a-3b1c-4ec0-8888-eee021a6934c name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:11:09 old-k8s-version-005932 crio[631]: time="2024-10-25 23:11:09.371181703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67046c2a-3b1c-4ec0-8888-eee021a6934c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct25 22:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053538] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.634497] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 22:54] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064930] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061174] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.184894] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.167513] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.254112] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.419742] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.063304] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.826111] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +11.981319] kauditd_printk_skb: 46 callbacks suppressed
	[Oct25 22:58] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Oct25 23:00] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.059452] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:11:09 up 17 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux old-k8s-version-005932 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000921680)
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: goroutine 161 [select]:
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b91ef0, 0x4f0ac20, 0xc000051c70, 0x1, 0xc00009e0c0)
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0008cec40, 0xc00009e0c0)
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b6a3f0, 0xc00092dec0)
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6543]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 25 23:11:06 old-k8s-version-005932 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 25 23:11:06 old-k8s-version-005932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 25 23:11:06 old-k8s-version-005932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 25 23:11:06 old-k8s-version-005932 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 25 23:11:06 old-k8s-version-005932 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6553]: I1025 23:11:06.976242    6553 server.go:416] Version: v1.20.0
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6553]: I1025 23:11:06.976532    6553 server.go:837] Client rotation is on, will bootstrap in background
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6553]: I1025 23:11:06.978371    6553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6553]: W1025 23:11:06.979301    6553 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 25 23:11:06 old-k8s-version-005932 kubelet[6553]: I1025 23:11:06.979393    6553 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (224.314143ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-005932" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.63s)
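
The failure above matches the K8S_KUBELET_NOT_RUNNING pattern captured in the log: kubeadm times out waiting for the kubelet's healthz endpoint on localhost:10248, systemd keeps restarting kubelet.service (restart counter at 114), and the kubelet warns "Cannot detect current cgroup on cgroup v2". A minimal troubleshooting sketch along the lines the log itself suggests, assuming the profile name old-k8s-version-005932 from this run and a plain `minikube` binary on the host:

	# inspect the kubelet on the node, as the kubeadm output suggests
	minikube -p old-k8s-version-005932 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-005932 ssh "sudo journalctl -xeu kubelet"

	# look for crashed control-plane containers via CRI-O (command taken from the kubeadm hint)
	minikube -p old-k8s-version-005932 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver minikube suggests for this error class
	minikube start -p old-k8s-version-005932 --extra-config=kubelet.cgroup-driver=systemd

Whether the cgroup-driver flag resolves this particular run is not established by the log itself; related discussion is tracked in the issue linked above (kubernetes/minikube#4172).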

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (388.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:11:10.024463  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(previous WARNING line repeated 36 more times while the API server at 192.168.39.215:8443 remained unreachable)
E1025 23:11:47.714818  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(previous WARNING line repeated 34 more times)
E1025 23:12:22.937171  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
(previous WARNING line repeated 34 more times)
E1025 23:12:57.097433  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 8 more times]
E1025 23:13:06.946722  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 59 more times]
E1025 23:14:06.911699  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 6 more times]
E1025 23:14:13.839513  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/no-preload-657458/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 14 more times]
E1025 23:14:28.274658  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 11 more times]
E1025 23:14:40.903135  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:14:42.624136  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 37 more times]
E1025 23:15:20.932822  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
[last message repeated 15 more times]
E1025 23:15:36.905398  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/no-preload-657458/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:15:47.013544  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:15:51.339994  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/default-k8s-diff-port-166447/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:16:47.714396  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
E1025 23:17:22.936102  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.215:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.215:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (229.423726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-005932" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-005932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-005932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.875µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-005932 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (226.034179ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-005932 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-601894 image list                          | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p embed-certs-601894                                  | embed-certs-601894           | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-657458 image list                           | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| delete  | -p no-preload-657458                                   | no-preload-657458            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	| addons  | enable metrics-server -p newest-cni-357495             | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:56 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-357495                  | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-357495 --memory=2200 --alsologtostderr   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-166447                           | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-166447 | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | default-k8s-diff-port-166447                           |                              |         |         |                     |                     |
	| image   | newest-cni-357495 image list                           | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	| delete  | -p newest-cni-357495                                   | newest-cni-357495            | jenkins | v1.34.0 | 25 Oct 24 22:57 UTC | 25 Oct 24 22:57 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 22:57:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:57:09.006096  728361 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:57:09.006201  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006209  728361 out.go:358] Setting ErrFile to fd 2...
	I1025 22:57:09.006214  728361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:57:09.006451  728361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:57:09.006988  728361 out.go:352] Setting JSON to false
	I1025 22:57:09.007986  728361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":20373,"bootTime":1729876656,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:57:09.008093  728361 start.go:139] virtualization: kvm guest
	I1025 22:57:09.010465  728361 out.go:177] * [newest-cni-357495] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:57:09.011802  728361 notify.go:220] Checking for updates...
	I1025 22:57:09.011839  728361 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:57:09.013146  728361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:57:09.014475  728361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:09.015727  728361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:57:09.016972  728361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:57:09.018210  728361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:57:09.019736  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:09.020150  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.020224  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.035482  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I1025 22:57:09.035920  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.036595  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.036617  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.037009  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.037247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.037593  728361 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:57:09.037912  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.037954  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.053072  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I1025 22:57:09.053595  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.054218  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.054244  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.054588  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.054779  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.090073  728361 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:57:09.091244  728361 start.go:297] selected driver: kvm2
	I1025 22:57:09.091260  728361 start.go:901] validating driver "kvm2" against &{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.091400  728361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:57:09.092078  728361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.092162  728361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:57:09.107070  728361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 22:57:09.107505  728361 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:09.107537  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:09.107588  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:09.107626  728361 start.go:340] cluster config:
	{Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:09.107743  728361 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:57:09.109586  728361 out.go:177] * Starting "newest-cni-357495" primary control-plane node in "newest-cni-357495" cluster
	I1025 22:57:09.110853  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:09.110886  728361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 22:57:09.110896  728361 cache.go:56] Caching tarball of preloaded images
	I1025 22:57:09.111001  728361 preload.go:172] Found /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1025 22:57:09.111015  728361 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1025 22:57:09.111159  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:09.111340  728361 start.go:360] acquireMachinesLock for newest-cni-357495: {Name:mkb1c973e2c1c03aebfdabc66f56a99606a6fa0b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:57:09.111385  728361 start.go:364] duration metric: took 26.544µs to acquireMachinesLock for "newest-cni-357495"
	I1025 22:57:09.111405  728361 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:57:09.111420  728361 fix.go:54] fixHost starting: 
	I1025 22:57:09.111679  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:09.111715  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:09.126695  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1025 22:57:09.127148  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:09.127662  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:09.127683  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:09.128015  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:09.128203  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:09.128345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:09.129983  728361 fix.go:112] recreateIfNeeded on newest-cni-357495: state=Stopped err=<nil>
	I1025 22:57:09.130022  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	W1025 22:57:09.130181  728361 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 22:57:09.131768  728361 out.go:177] * Restarting existing kvm2 VM for "newest-cni-357495" ...
	I1025 22:57:04.664834  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:04.677759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:04.677820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:04.710557  726389 cri.go:89] found id: ""
	I1025 22:57:04.710585  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.710594  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:04.710601  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:04.710655  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:04.747197  726389 cri.go:89] found id: ""
	I1025 22:57:04.747225  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.747234  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:04.747240  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:04.747288  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:04.787986  726389 cri.go:89] found id: ""
	I1025 22:57:04.788018  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.788027  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:04.788034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:04.788091  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:04.819796  726389 cri.go:89] found id: ""
	I1025 22:57:04.819824  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.819833  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:04.819839  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:04.819887  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:04.856885  726389 cri.go:89] found id: ""
	I1025 22:57:04.856925  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.856938  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:04.856946  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:04.857021  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:04.901723  726389 cri.go:89] found id: ""
	I1025 22:57:04.901759  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.901770  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:04.901779  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:04.901846  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:04.943775  726389 cri.go:89] found id: ""
	I1025 22:57:04.943810  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.943821  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:04.943830  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:04.943893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:04.985957  726389 cri.go:89] found id: ""
	I1025 22:57:04.985982  726389 logs.go:282] 0 containers: []
	W1025 22:57:04.985991  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:04.986000  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:04.986012  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:05.061490  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:05.061529  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:05.103028  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:05.103059  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:05.152607  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:05.152644  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:05.167577  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:05.167624  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:05.246428  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:07.747514  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:07.764567  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:07.764653  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:07.804356  726389 cri.go:89] found id: ""
	I1025 22:57:07.804453  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.804479  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:07.804498  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:07.804594  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:07.852155  726389 cri.go:89] found id: ""
	I1025 22:57:07.852190  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.852201  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:07.852210  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:07.852287  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:07.906149  726389 cri.go:89] found id: ""
	I1025 22:57:07.906195  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.906209  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:07.906237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:07.906321  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:07.946134  726389 cri.go:89] found id: ""
	I1025 22:57:07.946165  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.946177  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:07.946189  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:07.946257  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:07.994191  726389 cri.go:89] found id: ""
	I1025 22:57:07.994225  726389 logs.go:282] 0 containers: []
	W1025 22:57:07.994243  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:07.994252  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:07.994324  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:08.039254  726389 cri.go:89] found id: ""
	I1025 22:57:08.039284  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.039296  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:08.039303  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:08.039370  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:08.083985  726389 cri.go:89] found id: ""
	I1025 22:57:08.084016  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.084027  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:08.084034  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:08.084100  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:08.121051  726389 cri.go:89] found id: ""
	I1025 22:57:08.121084  726389 logs.go:282] 0 containers: []
	W1025 22:57:08.121096  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:08.121111  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:08.121128  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:08.210698  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:08.210743  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:08.251297  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:08.251326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:08.309007  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:08.309049  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:08.323243  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:08.323281  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:08.395704  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:06.985771  725359 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001894992s
	I1025 22:57:06.985860  725359 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1025 22:57:11.989818  725359 kubeadm.go:310] [api-check] The API server is healthy after 5.002310213s
	I1025 22:57:12.000090  725359 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 22:57:12.029347  725359 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 22:57:12.065009  725359 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 22:57:12.065298  725359 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-166447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 22:57:12.080390  725359 kubeadm.go:310] [bootstrap-token] Using token: gn84c5.mnibhpx86csafbn4
	I1025 22:57:12.081888  725359 out.go:235]   - Configuring RBAC rules ...
	I1025 22:57:12.082040  725359 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 22:57:12.094696  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 22:57:12.107652  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 22:57:12.112673  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 22:57:12.118594  725359 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 22:57:12.131842  725359 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 22:57:12.397191  725359 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 22:57:12.821901  725359 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 22:57:13.393906  725359 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 22:57:13.394919  725359 kubeadm.go:310] 
	I1025 22:57:13.395007  725359 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 22:57:13.395019  725359 kubeadm.go:310] 
	I1025 22:57:13.395120  725359 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 22:57:13.395130  725359 kubeadm.go:310] 
	I1025 22:57:13.395163  725359 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 22:57:13.395252  725359 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 22:57:13.395324  725359 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 22:57:13.395333  725359 kubeadm.go:310] 
	I1025 22:57:13.395388  725359 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 22:57:13.395398  725359 kubeadm.go:310] 
	I1025 22:57:13.395460  725359 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 22:57:13.395470  725359 kubeadm.go:310] 
	I1025 22:57:13.395533  725359 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 22:57:13.395623  725359 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 22:57:13.395711  725359 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 22:57:13.395735  725359 kubeadm.go:310] 
	I1025 22:57:13.395856  725359 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 22:57:13.395977  725359 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 22:57:13.395991  725359 kubeadm.go:310] 
	I1025 22:57:13.396103  725359 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396257  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a \
	I1025 22:57:13.396290  725359 kubeadm.go:310] 	--control-plane 
	I1025 22:57:13.396299  725359 kubeadm.go:310] 
	I1025 22:57:13.396418  725359 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 22:57:13.396428  725359 kubeadm.go:310] 
	I1025 22:57:13.396539  725359 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gn84c5.mnibhpx86csafbn4 \
	I1025 22:57:13.396691  725359 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:382ffec0203534c7e6fe5736e3058caaac08d9601757dfa1800537afa103911a 
	I1025 22:57:13.397292  725359 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:57:13.397395  725359 cni.go:84] Creating CNI manager for ""
	I1025 22:57:13.397415  725359 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:13.399132  725359 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:09.132799  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Start
	I1025 22:57:09.133007  728361 main.go:141] libmachine: (newest-cni-357495) starting domain...
	I1025 22:57:09.133028  728361 main.go:141] libmachine: (newest-cni-357495) ensuring networks are active...
	I1025 22:57:09.133784  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network default is active
	I1025 22:57:09.134127  728361 main.go:141] libmachine: (newest-cni-357495) Ensuring network mk-newest-cni-357495 is active
	I1025 22:57:09.134535  728361 main.go:141] libmachine: (newest-cni-357495) getting domain XML...
	I1025 22:57:09.135259  728361 main.go:141] libmachine: (newest-cni-357495) creating domain...
	I1025 22:57:10.376675  728361 main.go:141] libmachine: (newest-cni-357495) waiting for IP...
	I1025 22:57:10.377919  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.378434  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.378529  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.378420  728395 retry.go:31] will retry after 234.774904ms: waiting for domain to come up
	I1025 22:57:10.615044  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.615713  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.615744  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.615692  728395 retry.go:31] will retry after 344.301388ms: waiting for domain to come up
	I1025 22:57:10.961349  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:10.961953  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:10.961987  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:10.961901  728395 retry.go:31] will retry after 439.472335ms: waiting for domain to come up
	I1025 22:57:11.403081  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:11.403801  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:11.403833  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:11.403754  728395 retry.go:31] will retry after 603.917881ms: waiting for domain to come up
	I1025 22:57:12.009100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.009791  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.009816  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.009766  728395 retry.go:31] will retry after 654.012412ms: waiting for domain to come up
	I1025 22:57:12.665694  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:12.666298  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:12.666331  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:12.666254  728395 retry.go:31] will retry after 598.223644ms: waiting for domain to come up
	I1025 22:57:13.266161  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:13.266714  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:13.266746  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:13.266670  728395 retry.go:31] will retry after 807.374369ms: waiting for domain to come up
	I1025 22:57:10.896885  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:10.912430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:10.912544  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:10.949298  726389 cri.go:89] found id: ""
	I1025 22:57:10.949332  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.949345  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:10.949356  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:10.949420  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:10.992906  726389 cri.go:89] found id: ""
	I1025 22:57:10.992941  726389 logs.go:282] 0 containers: []
	W1025 22:57:10.992963  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:10.992972  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:10.993037  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:11.035283  726389 cri.go:89] found id: ""
	I1025 22:57:11.035312  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.035321  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:11.035329  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:11.035391  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:11.075912  726389 cri.go:89] found id: ""
	I1025 22:57:11.075945  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.075957  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:11.075966  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:11.076031  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:11.116675  726389 cri.go:89] found id: ""
	I1025 22:57:11.116709  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.116721  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:11.116727  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:11.116788  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:11.153210  726389 cri.go:89] found id: ""
	I1025 22:57:11.153244  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.153258  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:11.153267  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:11.153331  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:11.195233  726389 cri.go:89] found id: ""
	I1025 22:57:11.195266  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.195278  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:11.195285  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:11.195346  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:11.237164  726389 cri.go:89] found id: ""
	I1025 22:57:11.237195  726389 logs.go:282] 0 containers: []
	W1025 22:57:11.237206  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:11.237219  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:11.237236  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:11.299994  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:11.300043  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:11.316006  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:11.316055  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:11.404343  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:11.404368  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:11.404384  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:11.496349  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:11.496391  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:14.050229  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:14.064529  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:14.064615  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:14.101831  726389 cri.go:89] found id: ""
	I1025 22:57:14.101863  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.101877  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:14.101886  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:14.101950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:14.139876  726389 cri.go:89] found id: ""
	I1025 22:57:14.139906  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.139915  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:14.139921  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:14.139982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:14.175405  726389 cri.go:89] found id: ""
	I1025 22:57:14.175442  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.175454  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:14.175463  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:14.175535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:14.220337  726389 cri.go:89] found id: ""
	I1025 22:57:14.220372  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.220392  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:14.220400  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:14.220471  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:14.262358  726389 cri.go:89] found id: ""
	I1025 22:57:14.262384  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.262393  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:14.262399  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:14.262457  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:14.303586  726389 cri.go:89] found id: ""
	I1025 22:57:14.303621  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.303629  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:14.303636  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:14.303687  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:14.343365  726389 cri.go:89] found id: ""
	I1025 22:57:14.343399  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.343411  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:14.343421  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:14.343494  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:14.376842  726389 cri.go:89] found id: ""
	I1025 22:57:14.376879  726389 logs.go:282] 0 containers: []
	W1025 22:57:14.376892  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:14.376905  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:14.376921  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:14.426780  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:14.426819  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:14.439976  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:14.440007  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:14.512226  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:14.512258  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:14.512276  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:14.588240  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:14.588284  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:13.400319  725359 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:13.410568  725359 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:13.431208  725359 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:13.431301  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:13.431322  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-166447 minikube.k8s.io/updated_at=2024_10_25T22_57_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=default-k8s-diff-port-166447 minikube.k8s.io/primary=true
	I1025 22:57:13.639716  725359 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:13.639860  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.140884  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:14.639916  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.140843  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:15.640888  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.140691  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:16.640258  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.140873  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.640232  725359 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 22:57:17.748262  725359 kubeadm.go:1113] duration metric: took 4.317031918s to wait for elevateKubeSystemPrivileges
	I1025 22:57:17.748310  725359 kubeadm.go:394] duration metric: took 5m32.487100054s to StartCluster
	I1025 22:57:17.748334  725359 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.748440  725359 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:17.749728  725359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:17.750023  725359 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.249 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:17.750214  725359 config.go:182] Loaded profile config "default-k8s-diff-port-166447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:17.750280  725359 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:17.750383  725359 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750403  725359 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750412  725359 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:17.750443  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750455  725359 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750479  725359 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-166447"
	I1025 22:57:17.750472  725359 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.750509  725359 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.750518  725359 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:17.750548  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.750880  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750914  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.750968  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.750996  725359 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-166447"
	I1025 22:57:17.751003  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751012  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751019  725359 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.751028  725359 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:17.751043  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.751061  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.751477  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.751531  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.752307  725359 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:17.754336  725359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:17.771639  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I1025 22:57:17.771674  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I1025 22:57:17.771640  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I1025 22:57:17.772091  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772144  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.772781  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.772806  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773002  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.773021  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.773179  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.773255  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.773747  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.773792  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.774065  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.774143  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.774156  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.774286  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.774620  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.775315  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.775393  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.777721  725359 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-166447"
	W1025 22:57:17.777747  725359 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:17.777782  725359 host.go:66] Checking if "default-k8s-diff-port-166447" exists ...
	I1025 22:57:17.778158  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.778209  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.779137  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I1025 22:57:17.779690  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.780249  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.780270  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.780756  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.781301  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.781337  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.795859  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32807
	I1025 22:57:17.796354  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I1025 22:57:17.796527  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.796726  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.797032  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797053  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797488  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.797567  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.797584  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.797677  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.798041  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.798308  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.799791  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I1025 22:57:17.799971  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.800466  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.800716  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.801196  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.801221  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.801700  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.802363  725359 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:17.802448  725359 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:17.802478  725359 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:17.802546  725359 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:17.804194  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I1025 22:57:17.804511  725359 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:17.804535  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:17.804557  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804629  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:17.804640  725359 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:17.804657  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.804697  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.805172  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.805189  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.805541  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.805768  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.809358  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.809694  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.810510  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.810544  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810708  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.810784  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.810929  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.811051  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.811140  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.811287  725359 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:17.811466  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.811495  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.811518  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.811635  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.814016  725359 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:14.076273  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:14.076902  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:14.076934  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:14.076868  728395 retry.go:31] will retry after 1.185306059s: waiting for domain to come up
	I1025 22:57:15.263741  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:15.264326  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:15.264366  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:15.264273  728395 retry.go:31] will retry after 1.322346565s: waiting for domain to come up
	I1025 22:57:16.588814  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:16.589321  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:16.589347  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:16.589282  728395 retry.go:31] will retry after 1.73855821s: waiting for domain to come up
	I1025 22:57:18.330419  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:18.331024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:18.331054  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:18.330973  728395 retry.go:31] will retry after 2.069940103s: waiting for domain to come up
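	The "will retry after …: waiting for domain to come up" messages above come from a simple retry loop whose wait grows a little on each attempt. A minimal Go sketch of that pattern follows; the helper name, growth factor and jitter are illustrative assumptions, not minikube's actual retry.go API.
	// Sketch only: retry with a growing, jittered wait, in the spirit of the
	// "will retry after ...: waiting for domain to come up" lines above.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// retry keeps calling fn until it succeeds or attempts run out, sleeping a
	// little longer between tries, similar to the retry intervals in the log.
	func retry(attempts int, initial time.Duration, fn func() error) error {
		wait := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow the interval each attempt
		}
		return err
	}
	
	func main() {
		tries := 0
		err := retry(5, time.Second, func() error {
			tries++
			if tries < 3 {
				return errors.New("waiting for domain to come up")
			}
			return nil
		})
		fmt.Println("done, err =", err)
	}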
	I1025 22:57:17.132197  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:17.146596  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:17.146674  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:17.185560  726389 cri.go:89] found id: ""
	I1025 22:57:17.185593  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.185603  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:17.185610  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:17.185670  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:17.220864  726389 cri.go:89] found id: ""
	I1025 22:57:17.220897  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.220910  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:17.220919  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:17.221004  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:17.260844  726389 cri.go:89] found id: ""
	I1025 22:57:17.260872  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.260880  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:17.260887  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:17.260939  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:17.302800  726389 cri.go:89] found id: ""
	I1025 22:57:17.302833  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.302845  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:17.302853  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:17.302913  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:17.337851  726389 cri.go:89] found id: ""
	I1025 22:57:17.337881  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.337892  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:17.337901  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:17.337959  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:17.374697  726389 cri.go:89] found id: ""
	I1025 22:57:17.374739  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.374752  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:17.374760  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:17.374827  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:17.419883  726389 cri.go:89] found id: ""
	I1025 22:57:17.419913  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.419923  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:17.419929  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:17.419981  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:17.453770  726389 cri.go:89] found id: ""
	I1025 22:57:17.453797  726389 logs.go:282] 0 containers: []
	W1025 22:57:17.453809  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:17.453821  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:17.453835  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:17.467935  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:17.467971  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:17.546221  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:17.546251  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:17.546269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:17.655338  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:17.655395  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:17.696499  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:17.696531  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
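	The log gathering above boils down to running a fixed set of journalctl/dmesg commands on the node and capturing their output. A rough Go sketch of the same commands; local execution with sudo is assumed here purely for illustration, since minikube runs them through its ssh_runner over the node's SSH session.
	// Sketch: run the journalctl/dmesg commands shown in the log and print their output.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Same commands as the "Gathering logs for ..." steps above.
		commands := map[string]string{
			"kubelet": `sudo journalctl -u kubelet -n 400`,
			"CRI-O":   `sudo journalctl -u crio -n 400`,
			"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		}
		for name, cmd := range commands {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s logs failed: %v\n", name, err)
				continue
			}
			fmt.Printf("==> %s <==\n%s\n", name, out)
		}
	}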
	I1025 22:57:17.815285  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:17.815304  725359 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:17.815325  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.821095  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821105  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.821115  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821128  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.821146  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.821336  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.821429  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.821740  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.821905  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:17.823391  725359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42213
	I1025 22:57:17.823756  725359 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:17.824397  725359 main.go:141] libmachine: Using API Version  1
	I1025 22:57:17.824420  725359 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:17.824819  725359 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:17.825001  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetState
	I1025 22:57:17.826499  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .DriverName
	I1025 22:57:17.826709  725359 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:17.826724  725359 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:17.826741  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHHostname
	I1025 22:57:17.829834  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830223  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:40:a0", ip: ""} in network mk-default-k8s-diff-port-166447: {Iface:virbr1 ExpiryTime:2024-10-25 23:51:31 +0000 UTC Type:0 Mac:52:54:00:c1:40:a0 Iaid: IPaddr:192.168.61.249 Prefix:24 Hostname:default-k8s-diff-port-166447 Clientid:01:52:54:00:c1:40:a0}
	I1025 22:57:17.830256  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | domain default-k8s-diff-port-166447 has defined IP address 192.168.61.249 and MAC address 52:54:00:c1:40:a0 in network mk-default-k8s-diff-port-166447
	I1025 22:57:17.830391  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHPort
	I1025 22:57:17.830555  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHKeyPath
	I1025 22:57:17.830712  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .GetSSHUsername
	I1025 22:57:17.830834  725359 sshutil.go:53] new ssh client: &{IP:192.168.61.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/default-k8s-diff-port-166447/id_rsa Username:docker}
	I1025 22:57:18.014991  725359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:18.036760  725359 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078787  725359 node_ready.go:49] node "default-k8s-diff-port-166447" has status "Ready":"True"
	I1025 22:57:18.078820  725359 node_ready.go:38] duration metric: took 42.016031ms for node "default-k8s-diff-port-166447" to be "Ready" ...
	I1025 22:57:18.078834  725359 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:18.085830  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:18.122468  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:18.122502  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:18.151830  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:18.164388  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:18.181181  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:18.181212  725359 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:18.239075  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:18.239113  725359 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:18.269994  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:18.270026  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:18.332398  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:18.332427  725359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:18.431935  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:18.431970  725359 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:18.435490  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:18.435518  725359 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:18.514890  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:18.514925  725359 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:18.543084  725359 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.543128  725359 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:18.577174  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:18.620888  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:18.620921  725359 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:18.697204  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:18.697242  725359 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:18.810445  725359 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:18.810484  725359 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:18.885504  725359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:19.260717  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.108837823s)
	I1025 22:57:19.260766  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.096340939s)
	I1025 22:57:19.260787  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260802  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.260807  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.260863  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261282  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261318  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261344  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.261350  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261372  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261385  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261441  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261466  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.261484  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.261526  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.261902  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.261916  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.262246  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.263229  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.263251  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:19.290328  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:19.290366  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:19.290838  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:19.290847  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:19.290864  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.132386  725359 pod_ready.go:103] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:20.242738  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.665512298s)
	I1025 22:57:20.242808  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.242828  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243142  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243200  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) DBG | Closing plugin on server side
	I1025 22:57:20.243217  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243225  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.243238  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.243508  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.243530  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.243542  725359 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-166447"
	I1025 22:57:20.984026  725359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.098465183s)
	I1025 22:57:20.984079  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984091  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984421  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984436  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.984444  725359 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:20.984451  725359 main.go:141] libmachine: (default-k8s-diff-port-166447) Calling .Close
	I1025 22:57:20.984739  725359 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:20.984761  725359 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:20.986558  725359 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-166447 addons enable metrics-server
	
	I1025 22:57:20.987567  725359 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1025 22:57:20.988902  725359 addons.go:510] duration metric: took 3.23862229s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
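	Each addon install above follows the same two steps: write the manifest into /etc/kubernetes/addons on the node, then apply it with the in-VM kubectl and kubeconfig. A rough Go sketch of that sequence; the paths and kubectl version are taken from the log, while running the command locally with os/exec is an assumption for illustration, since minikube executes it over the node's SSH session.
	// Sketch: install an addon manifest and apply it the way the log shows.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)
	
	func applyAddon(manifest []byte, name string) error {
		dst := filepath.Join("/etc/kubernetes/addons", name)
		if err := os.WriteFile(dst, manifest, 0o644); err != nil {
			return fmt.Errorf("writing %s: %w", dst, err)
		}
		// Same invocation as in the log: kubectl apply with the node's kubeconfig.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply", "-f", dst)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}
	
	func main() {
		yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
		if err := applyAddon(yaml, "example-ns.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}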
	I1025 22:57:21.593090  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.593118  725359 pod_ready.go:82] duration metric: took 3.507254474s for pod "coredns-7c65d6cfc9-6fssw" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.593131  725359 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597786  725359 pod_ready.go:93] pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:21.597816  725359 pod_ready.go:82] duration metric: took 4.674133ms for pod "coredns-7c65d6cfc9-gq5mv" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:21.597830  725359 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:20.402145  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:20.402661  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:20.402722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:20.402656  728395 retry.go:31] will retry after 3.412502046s: waiting for domain to come up
	I1025 22:57:23.818716  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:23.819208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | unable to find current IP address of domain newest-cni-357495 in network mk-newest-cni-357495
	I1025 22:57:23.819237  728361 main.go:141] libmachine: (newest-cni-357495) DBG | I1025 22:57:23.819161  728395 retry.go:31] will retry after 4.418758048s: waiting for domain to come up
	I1025 22:57:20.249946  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:20.267883  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:20.267964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:20.317028  726389 cri.go:89] found id: ""
	I1025 22:57:20.317071  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.317083  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:20.317092  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:20.317159  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:20.362449  726389 cri.go:89] found id: ""
	I1025 22:57:20.362481  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.362491  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:20.362497  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:20.362548  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:20.398308  726389 cri.go:89] found id: ""
	I1025 22:57:20.398348  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.398369  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:20.398377  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:20.398450  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:20.446702  726389 cri.go:89] found id: ""
	I1025 22:57:20.446731  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.446740  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:20.446746  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:20.446798  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:20.489776  726389 cri.go:89] found id: ""
	I1025 22:57:20.489809  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.489826  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:20.489833  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:20.489899  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:20.535387  726389 cri.go:89] found id: ""
	I1025 22:57:20.535415  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.535426  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:20.535442  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:20.535507  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:20.573433  726389 cri.go:89] found id: ""
	I1025 22:57:20.573467  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.573479  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:20.573487  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:20.573554  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:20.613584  726389 cri.go:89] found id: ""
	I1025 22:57:20.613619  726389 logs.go:282] 0 containers: []
	W1025 22:57:20.613631  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:20.613643  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:20.613664  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:20.675387  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:20.675426  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:20.691467  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:20.691513  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:20.813943  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:20.813975  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:20.813992  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:20.904974  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:20.905028  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.450429  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:23.464096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:23.464169  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:23.500126  726389 cri.go:89] found id: ""
	I1025 22:57:23.500152  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.500161  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:23.500167  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:23.500220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:23.534564  726389 cri.go:89] found id: ""
	I1025 22:57:23.534597  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.534608  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:23.534615  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:23.534666  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:23.577493  726389 cri.go:89] found id: ""
	I1025 22:57:23.577529  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.577541  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:23.577551  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:23.577679  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:23.616432  726389 cri.go:89] found id: ""
	I1025 22:57:23.616463  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.616474  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:23.616488  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:23.616553  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:23.655679  726389 cri.go:89] found id: ""
	I1025 22:57:23.655715  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.655727  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:23.655735  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:23.655804  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:23.695528  726389 cri.go:89] found id: ""
	I1025 22:57:23.695558  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.695570  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:23.695578  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:23.695642  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:23.734570  726389 cri.go:89] found id: ""
	I1025 22:57:23.734610  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.734622  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:23.734631  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:23.734703  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:23.774178  726389 cri.go:89] found id: ""
	I1025 22:57:23.774213  726389 logs.go:282] 0 containers: []
	W1025 22:57:23.774225  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:23.774238  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:23.774254  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:23.857347  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:23.857389  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:23.896130  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:23.896167  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:23.948276  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:23.948320  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:23.961809  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:23.961840  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:24.053746  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
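	The "No container was found matching ..." warnings above come from probing each control-plane component with "crictl ps -a --quiet --name=<component>" and treating empty output as "not found". A minimal Go sketch of that check; running crictl locally with sudo is assumed for illustration, since minikube executes it inside the node over SSH.
	// Sketch: list control-plane containers by name via crictl, as in the log.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}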
	I1025 22:57:23.604335  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.104577  725359 pod_ready.go:103] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"False"
	I1025 22:57:26.613548  725359 pod_ready.go:93] pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.613571  725359 pod_ready.go:82] duration metric: took 5.015733422s for pod "etcd-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.613582  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621883  725359 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.621908  725359 pod_ready.go:82] duration metric: took 8.319327ms for pod "kube-apiserver-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.621919  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630956  725359 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.630981  725359 pod_ready.go:82] duration metric: took 9.055173ms for pod "kube-controller-manager-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.630994  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647393  725359 pod_ready.go:93] pod "kube-proxy-zqjjc" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.647428  725359 pod_ready.go:82] duration metric: took 16.426697ms for pod "kube-proxy-zqjjc" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.647440  725359 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658038  725359 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace has status "Ready":"True"
	I1025 22:57:26.658067  725359 pod_ready.go:82] duration metric: took 10.617453ms for pod "kube-scheduler-default-k8s-diff-port-166447" in "kube-system" namespace to be "Ready" ...
	I1025 22:57:26.658077  725359 pod_ready.go:39] duration metric: took 8.57922838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 22:57:26.658096  725359 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:26.658162  725359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.705852  725359 api_server.go:72] duration metric: took 8.955782657s to wait for apiserver process to appear ...
	I1025 22:57:26.705882  725359 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:26.705909  725359 api_server.go:253] Checking apiserver healthz at https://192.168.61.249:8444/healthz ...
	I1025 22:57:26.712359  725359 api_server.go:279] https://192.168.61.249:8444/healthz returned 200:
	ok
	I1025 22:57:26.713354  725359 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:26.713378  725359 api_server.go:131] duration metric: took 7.487484ms to wait for apiserver health ...
	I1025 22:57:26.713397  725359 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:26.809108  725359 system_pods.go:59] 9 kube-system pods found
	I1025 22:57:26.809156  725359 system_pods.go:61] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:26.809165  725359 system_pods.go:61] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:26.809177  725359 system_pods.go:61] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:26.809184  725359 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:26.809191  725359 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:26.809196  725359 system_pods.go:61] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:26.809203  725359 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:26.809216  725359 system_pods.go:61] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:26.809226  725359 system_pods.go:61] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:26.809243  725359 system_pods.go:74] duration metric: took 95.838638ms to wait for pod list to return data ...
	I1025 22:57:26.809259  725359 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:27.003062  725359 default_sa.go:45] found service account: "default"
	I1025 22:57:27.003103  725359 default_sa.go:55] duration metric: took 193.830229ms for default service account to be created ...
	I1025 22:57:27.003120  725359 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 22:57:27.206396  725359 system_pods.go:86] 9 kube-system pods found
	I1025 22:57:27.206438  725359 system_pods.go:89] "coredns-7c65d6cfc9-6fssw" [06188cce-77d0-4fdd-9769-98498d4912c2] Running
	I1025 22:57:27.206446  725359 system_pods.go:89] "coredns-7c65d6cfc9-gq5mv" [bd29ba17-303d-47b6-a34b-2045fe2b965d] Running
	I1025 22:57:27.206452  725359 system_pods.go:89] "etcd-default-k8s-diff-port-166447" [e2c29643-05fb-4424-81ee-cf258c0b7e95] Running
	I1025 22:57:27.206457  725359 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-166447" [1ffa7ab6-e0ca-484b-bc1a-eba3741e9e72] Running
	I1025 22:57:27.206463  725359 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-166447" [7f2e2ca6-75ff-43a5-9787-8cb35d659c95] Running
	I1025 22:57:27.206468  725359 system_pods.go:89] "kube-proxy-zqjjc" [144928e5-1a70-4f28-8c34-c5f8c11bf2d7] Running
	I1025 22:57:27.206473  725359 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-166447" [e5852078-0ead-45bb-9197-e1d06d125a43] Running
	I1025 22:57:27.206485  725359 system_pods.go:89] "metrics-server-6867b74b74-nvxph" [e0d75616-8015-4d30-b209-f68ff502d6d2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:27.206491  725359 system_pods.go:89] "storage-provisioner" [855c29fa-deba-4342-90eb-19c66fd7905f] Running
	I1025 22:57:27.206500  725359 system_pods.go:126] duration metric: took 203.373296ms to wait for k8s-apps to be running ...
	I1025 22:57:27.206511  725359 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:57:27.206568  725359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:57:27.236359  725359 system_svc.go:56] duration metric: took 29.835602ms WaitForService to wait for kubelet
	I1025 22:57:27.236401  725359 kubeadm.go:582] duration metric: took 9.486336184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 22:57:27.236428  725359 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:27.404633  725359 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:27.404660  725359 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:27.404674  725359 node_conditions.go:105] duration metric: took 168.23879ms to run NodePressure ...
	I1025 22:57:27.404686  725359 start.go:241] waiting for startup goroutines ...
	I1025 22:57:27.404693  725359 start.go:246] waiting for cluster config update ...
	I1025 22:57:27.404704  725359 start.go:255] writing updated cluster config ...
	I1025 22:57:27.404950  725359 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:27.471713  725359 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:27.473904  725359 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-166447" cluster and "default" namespace by default
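	The "waiting for apiserver healthz status" step above is a poll of the apiserver's /healthz endpoint until it answers 200 "ok" or a deadline passes. A small Go sketch of that wait; skipping TLS verification here is purely for illustration, since minikube authenticates with the cluster's client certificates instead.
	// Sketch: poll https://<node-ip>:<apiserver-port>/healthz until it reports ok.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.61.249:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}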
	I1025 22:57:28.242024  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242494  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has current primary IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.242523  728361 main.go:141] libmachine: (newest-cni-357495) found domain IP: 192.168.72.113
	I1025 22:57:28.242535  728361 main.go:141] libmachine: (newest-cni-357495) reserving static IP address...
	I1025 22:57:28.242970  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.243000  728361 main.go:141] libmachine: (newest-cni-357495) DBG | skip adding static IP to network mk-newest-cni-357495 - found existing host DHCP lease matching {name: "newest-cni-357495", mac: "52:54:00:fb:c0:76", ip: "192.168.72.113"}
	I1025 22:57:28.243013  728361 main.go:141] libmachine: (newest-cni-357495) reserved static IP address 192.168.72.113 for domain newest-cni-357495
	I1025 22:57:28.243028  728361 main.go:141] libmachine: (newest-cni-357495) waiting for SSH...
	I1025 22:57:28.243042  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Getting to WaitForSSH function...
	I1025 22:57:28.245300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245651  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.245680  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.245811  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH client type: external
	I1025 22:57:28.245835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Using SSH private key: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa (-rw-------)
	I1025 22:57:28.245865  728361 main.go:141] libmachine: (newest-cni-357495) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:57:28.245876  728361 main.go:141] libmachine: (newest-cni-357495) DBG | About to run SSH command:
	I1025 22:57:28.245886  728361 main.go:141] libmachine: (newest-cni-357495) DBG | exit 0
	I1025 22:57:28.377143  728361 main.go:141] libmachine: (newest-cni-357495) DBG | SSH cmd err, output: <nil>: 
	I1025 22:57:28.377542  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetConfigRaw
	I1025 22:57:28.378182  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.380998  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381388  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.381422  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.381661  728361 profile.go:143] Saving config to /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/config.json ...
	I1025 22:57:28.382355  728361 machine.go:93] provisionDockerMachine start ...
	I1025 22:57:28.382383  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:28.382637  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.384883  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385241  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.385266  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.385388  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.385550  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385705  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.385873  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.386055  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.386295  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.386309  728361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 22:57:28.489731  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 22:57:28.489766  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490029  728361 buildroot.go:166] provisioning hostname "newest-cni-357495"
	I1025 22:57:28.490072  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.490223  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.493372  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493804  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.493835  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.493978  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.494135  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494278  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.494406  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.494585  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.494823  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.494850  728361 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-357495 && echo "newest-cni-357495" | sudo tee /etc/hostname
	I1025 22:57:28.612233  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-357495
	
	I1025 22:57:28.612271  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.615209  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615542  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.615568  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.615802  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.616013  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.616377  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.616605  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:28.616836  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:28.616860  728361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-357495' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-357495/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-357495' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:57:28.731112  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 22:57:28.731149  728361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19758-661979/.minikube CaCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19758-661979/.minikube}
	I1025 22:57:28.731175  728361 buildroot.go:174] setting up certificates
	I1025 22:57:28.731189  728361 provision.go:84] configureAuth start
	I1025 22:57:28.731202  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetMachineName
	I1025 22:57:28.731508  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:28.734722  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735105  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.735159  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.735349  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.737700  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738025  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.738059  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.738280  728361 provision.go:143] copyHostCerts
	I1025 22:57:28.738356  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem, removing ...
	I1025 22:57:28.738370  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem
	I1025 22:57:28.738437  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/key.pem (1675 bytes)
	I1025 22:57:28.738544  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem, removing ...
	I1025 22:57:28.738551  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem
	I1025 22:57:28.738576  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/ca.pem (1082 bytes)
	I1025 22:57:28.738644  728361 exec_runner.go:144] found /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem, removing ...
	I1025 22:57:28.738652  728361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem
	I1025 22:57:28.738673  728361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19758-661979/.minikube/cert.pem (1123 bytes)
	I1025 22:57:28.738739  728361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem org=jenkins.newest-cni-357495 san=[127.0.0.1 192.168.72.113 localhost minikube newest-cni-357495]
	I1025 22:57:28.833704  728361 provision.go:177] copyRemoteCerts
	I1025 22:57:28.833762  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:57:28.833797  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:28.836780  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837177  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:28.837208  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:28.837372  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:28.837573  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:28.837734  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:28.837863  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:28.922411  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 22:57:28.948328  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:57:28.976524  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 22:57:29.005619  728361 provision.go:87] duration metric: took 274.411907ms to configureAuth
	I1025 22:57:29.005654  728361 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:57:29.005887  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:29.005985  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:26.553979  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:26.567886  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:26.567964  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:26.603338  726389 cri.go:89] found id: ""
	I1025 22:57:26.603376  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.603389  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:26.603403  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:26.603475  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:26.637525  726389 cri.go:89] found id: ""
	I1025 22:57:26.637548  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.637556  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:26.637562  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:26.637609  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:26.672117  726389 cri.go:89] found id: ""
	I1025 22:57:26.672150  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.672159  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:26.672166  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:26.672230  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:26.705637  726389 cri.go:89] found id: ""
	I1025 22:57:26.705669  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.705681  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:26.705689  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:26.705762  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:26.759040  726389 cri.go:89] found id: ""
	I1025 22:57:26.759070  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.759084  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:26.759092  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:26.759161  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:26.811512  726389 cri.go:89] found id: ""
	I1025 22:57:26.811537  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.811547  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:26.811555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:26.811641  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:26.851215  726389 cri.go:89] found id: ""
	I1025 22:57:26.851245  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.851256  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:26.851264  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:26.851330  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:26.884460  726389 cri.go:89] found id: ""
	I1025 22:57:26.884495  726389 logs.go:282] 0 containers: []
	W1025 22:57:26.884508  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:26.884520  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:26.884535  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:26.960048  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:26.960092  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:26.998588  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:26.998620  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:27.061646  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:27.061692  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:27.078350  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:27.078385  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:27.150478  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:29.009371  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.009852  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.009887  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.010056  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.010269  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010451  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.010622  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.010818  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.010989  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.011004  728361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 22:57:29.235601  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 22:57:29.235655  728361 machine.go:96] duration metric: took 853.280404ms to provisionDockerMachine
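
The CRIO_MINIKUBE_OPTIONS value written above only takes effect because the ISO's crio.service is expected to source /etc/sysconfig/crio.minikube as an environment file; that wiring is an assumption about the Buildroot image, not something this log shows. A minimal way to confirm it from the host, reusing the profile name from this run:

	# illustrative check, assuming crio.service references the env file via EnvironmentFile=
	out/minikube-linux-amd64 -p newest-cni-357495 ssh "cat /etc/sysconfig/crio.minikube && systemctl cat crio | grep -i EnvironmentFile"
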
	I1025 22:57:29.235672  728361 start.go:293] postStartSetup for "newest-cni-357495" (driver="kvm2")
	I1025 22:57:29.235694  728361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:57:29.235722  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.236076  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:57:29.236116  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.239049  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239449  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.239482  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.239668  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.239889  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.240099  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.240319  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.327450  728361 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:57:29.331888  728361 info.go:137] Remote host: Buildroot 2023.02.9
	I1025 22:57:29.331921  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/addons for local assets ...
	I1025 22:57:29.331987  728361 filesync.go:126] Scanning /home/jenkins/minikube-integration/19758-661979/.minikube/files for local assets ...
	I1025 22:57:29.332065  728361 filesync.go:149] local asset: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem -> 6691772.pem in /etc/ssl/certs
	I1025 22:57:29.332195  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:57:29.341892  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:29.367038  728361 start.go:296] duration metric: took 131.349254ms for postStartSetup
	I1025 22:57:29.367084  728361 fix.go:56] duration metric: took 20.2556649s for fixHost
	I1025 22:57:29.367106  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.369924  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370255  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.370285  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.370425  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.370590  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370745  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.370950  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.371124  728361 main.go:141] libmachine: Using SSH client type: native
	I1025 22:57:29.371304  728361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865fa0] 0x868c80 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I1025 22:57:29.371313  728361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 22:57:29.474861  728361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729897049.432427295
	
	I1025 22:57:29.474889  728361 fix.go:216] guest clock: 1729897049.432427295
	I1025 22:57:29.474899  728361 fix.go:229] Guest: 2024-10-25 22:57:29.432427295 +0000 UTC Remote: 2024-10-25 22:57:29.367088624 +0000 UTC m=+20.400142994 (delta=65.338671ms)
	I1025 22:57:29.474946  728361 fix.go:200] guest clock delta is within tolerance: 65.338671ms
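
The clock check above reads the guest clock over SSH (date +%s.%N), compares it with the time recorded on the host side, and only resyncs when the delta exceeds a tolerance. A rough manual equivalent, purely illustrative (a delta measured this way also absorbs the SSH round trip, just as the 65ms figure above does):

	# illustrative: measure guest-vs-host clock skew for this profile
	guest=$(out/minikube-linux-amd64 -p newest-cni-357495 ssh "date +%s.%N")
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %.3fs\n", g - h }'
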
	I1025 22:57:29.474960  728361 start.go:83] releasing machines lock for "newest-cni-357495", held for 20.363562153s
	I1025 22:57:29.474986  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.475248  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:29.478056  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478406  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.478437  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.478628  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479132  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479319  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:29.479468  728361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:57:29.479506  728361 ssh_runner.go:195] Run: cat /version.json
	I1025 22:57:29.479527  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.479536  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:29.482531  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.482637  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483074  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483100  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483131  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:29.483191  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:29.483471  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483481  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:29.483652  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:29.483931  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.483955  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:29.484103  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.484143  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:29.582367  728361 ssh_runner.go:195] Run: systemctl --version
	I1025 22:57:29.590693  728361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 22:57:29.745303  728361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:57:29.754423  728361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:57:29.754501  728361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:57:29.775617  728361 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:57:29.775648  728361 start.go:495] detecting cgroup driver to use...
	I1025 22:57:29.775747  728361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:57:29.799558  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:57:29.818705  728361 docker.go:217] disabling cri-docker service (if available) ...
	I1025 22:57:29.818806  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 22:57:29.833563  728361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 22:57:29.853630  728361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 22:57:29.983430  728361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 22:57:30.197267  728361 docker.go:233] disabling docker service ...
	I1025 22:57:30.197347  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 22:57:30.216012  728361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 22:57:30.230378  728361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 22:57:30.360555  728361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 22:57:30.484679  728361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 22:57:30.503208  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:57:30.523720  728361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1025 22:57:30.523795  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.535314  728361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 22:57:30.535383  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.546715  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.557826  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.569760  728361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:57:30.582722  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.593853  728361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.611448  728361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 22:57:30.622915  728361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:57:30.633073  728361 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1025 22:57:30.633147  728361 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1025 22:57:30.647230  728361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:57:30.657299  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:30.768765  728361 ssh_runner.go:195] Run: sudo systemctl restart crio
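
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, pod-scoped conmon cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. An illustrative way to see what those edits converged to on the node:

	# illustrative: show the keys the sed edits above target
	out/minikube-linux-amd64 -p newest-cni-357495 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
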
	I1025 22:57:30.854500  728361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 22:57:30.854590  728361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 22:57:30.859405  728361 start.go:563] Will wait 60s for crictl version
	I1025 22:57:30.859473  728361 ssh_runner.go:195] Run: which crictl
	I1025 22:57:30.863420  728361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:57:30.908862  728361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1025 22:57:30.908976  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.939582  728361 ssh_runner.go:195] Run: crio --version
	I1025 22:57:30.978153  728361 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1025 22:57:30.979430  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetIP
	I1025 22:57:30.982243  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982608  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:30.982641  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:30.982834  728361 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1025 22:57:30.988035  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:57:31.004301  728361 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 22:57:31.005441  728361 kubeadm.go:883] updating cluster {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 22:57:31.005579  728361 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 22:57:31.005635  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:31.049853  728361 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1025 22:57:31.049928  728361 ssh_runner.go:195] Run: which lz4
	I1025 22:57:31.054174  728361 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 22:57:31.058473  728361 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 22:57:31.058505  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1025 22:57:32.497532  728361 crio.go:462] duration metric: took 1.44340372s to copy over tarball
	I1025 22:57:32.497637  728361 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 22:57:29.650805  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:29.664484  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:29.664563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:29.706919  726389 cri.go:89] found id: ""
	I1025 22:57:29.706950  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.706961  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:29.706968  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:29.707032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:29.748272  726389 cri.go:89] found id: ""
	I1025 22:57:29.748301  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.748313  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:29.748322  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:29.748383  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:29.783239  726389 cri.go:89] found id: ""
	I1025 22:57:29.783281  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.783303  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:29.783315  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:29.783381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:29.828942  726389 cri.go:89] found id: ""
	I1025 22:57:29.829005  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.829021  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:29.829031  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:29.829112  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:29.874831  726389 cri.go:89] found id: ""
	I1025 22:57:29.874864  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.874876  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:29.874885  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:29.874950  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:29.920380  726389 cri.go:89] found id: ""
	I1025 22:57:29.920411  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.920422  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:29.920430  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:29.920495  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:29.964594  726389 cri.go:89] found id: ""
	I1025 22:57:29.964624  726389 logs.go:282] 0 containers: []
	W1025 22:57:29.964636  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:29.964643  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:29.964713  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:30.000416  726389 cri.go:89] found id: ""
	I1025 22:57:30.000449  726389 logs.go:282] 0 containers: []
	W1025 22:57:30.000461  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:30.000475  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:30.000500  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:30.073028  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:30.073055  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:30.073072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:30.158430  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:30.158481  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:30.212493  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:30.212530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:30.289552  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:30.289606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:32.808776  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:32.822039  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:32.822111  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:32.857007  726389 cri.go:89] found id: ""
	I1025 22:57:32.857042  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.857054  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:32.857063  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:32.857122  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:32.902015  726389 cri.go:89] found id: ""
	I1025 22:57:32.902045  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.902057  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:32.902066  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:32.902146  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:32.962252  726389 cri.go:89] found id: ""
	I1025 22:57:32.962287  726389 logs.go:282] 0 containers: []
	W1025 22:57:32.962299  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:32.962307  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:32.962381  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:33.010092  726389 cri.go:89] found id: ""
	I1025 22:57:33.010129  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.010140  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:33.010149  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:33.010219  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:33.057453  726389 cri.go:89] found id: ""
	I1025 22:57:33.057482  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.057492  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:33.057499  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:33.057618  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:33.096991  726389 cri.go:89] found id: ""
	I1025 22:57:33.097024  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.097035  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:33.097042  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:33.097092  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:33.130710  726389 cri.go:89] found id: ""
	I1025 22:57:33.130740  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.130751  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:33.130759  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:33.130820  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:33.169440  726389 cri.go:89] found id: ""
	I1025 22:57:33.169479  726389 logs.go:282] 0 containers: []
	W1025 22:57:33.169491  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:33.169505  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:33.169520  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:33.249558  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:33.249586  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:33.249603  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:33.364568  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:33.364613  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:33.415233  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:33.415264  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:33.472943  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:33.473014  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:34.612317  728361 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.11464276s)
	I1025 22:57:34.612352  728361 crio.go:469] duration metric: took 2.114771262s to extract the tarball
	I1025 22:57:34.612363  728361 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 22:57:34.651878  728361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 22:57:34.694439  728361 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 22:57:34.694463  728361 cache_images.go:84] Images are preloaded, skipping loading
	I1025 22:57:34.694472  728361 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.31.1 crio true true} ...
	I1025 22:57:34.694604  728361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-357495 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 22:57:34.694677  728361 ssh_runner.go:195] Run: crio config
	I1025 22:57:34.748152  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:34.748178  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:34.748189  728361 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1025 22:57:34.748215  728361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-357495 NodeName:newest-cni-357495 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:57:34.748372  728361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-357495"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.113"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 22:57:34.748437  728361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1025 22:57:34.760143  728361 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:57:34.760202  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:57:34.771582  728361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1025 22:57:34.787944  728361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:57:34.804113  728361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
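
The kubeadm/kubelet/kube-proxy configuration rendered above is the payload just copied to /var/tmp/minikube/kubeadm.yaml.new. If it needs to be inspected or sanity-checked by hand, something along these lines would work; the "kubeadm config validate" subcommand is assumed to exist in the bundled v1.31.1 binary rather than shown anywhere in this log:

	# illustrative: read back and validate the generated config on the node
	out/minikube-linux-amd64 -p newest-cni-357495 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	out/minikube-linux-amd64 -p newest-cni-357495 ssh "sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"
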
	I1025 22:57:34.820688  728361 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I1025 22:57:34.824565  728361 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:57:34.837134  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:34.952711  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:34.970911  728361 certs.go:68] Setting up /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495 for IP: 192.168.72.113
	I1025 22:57:34.970937  728361 certs.go:194] generating shared ca certs ...
	I1025 22:57:34.970959  728361 certs.go:226] acquiring lock for ca certs: {Name:mkb3c0b02dcf77d22d8e250175d24731a46db4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:34.971160  728361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key
	I1025 22:57:34.971239  728361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key
	I1025 22:57:34.971254  728361 certs.go:256] generating profile certs ...
	I1025 22:57:34.971378  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/client.key
	I1025 22:57:34.971475  728361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key.03300bc5
	I1025 22:57:34.971536  728361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key
	I1025 22:57:34.971687  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem (1338 bytes)
	W1025 22:57:34.971735  728361 certs.go:480] ignoring /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177_empty.pem, impossibly tiny 0 bytes
	I1025 22:57:34.971748  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:57:34.971781  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:57:34.971814  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:57:34.971845  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/certs/key.pem (1675 bytes)
	I1025 22:57:34.971898  728361 certs.go:484] found cert: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem (1708 bytes)
	I1025 22:57:34.972920  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:57:35.035802  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 22:57:35.066849  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:57:35.095746  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:57:35.122667  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 22:57:35.152086  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:57:35.178215  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:57:35.201152  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/newest-cni-357495/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 22:57:35.225276  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:57:35.247950  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/certs/669177.pem --> /usr/share/ca-certificates/669177.pem (1338 bytes)
	I1025 22:57:35.273680  728361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/ssl/certs/6691772.pem --> /usr/share/ca-certificates/6691772.pem (1708 bytes)
	I1025 22:57:35.297790  728361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:57:35.314273  728361 ssh_runner.go:195] Run: openssl version
	I1025 22:57:35.319977  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:57:35.332531  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337386  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:36 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.337435  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:57:35.343239  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:57:35.354526  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/669177.pem && ln -fs /usr/share/ca-certificates/669177.pem /etc/ssl/certs/669177.pem"
	I1025 22:57:35.364927  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369254  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:46 /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.369307  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/669177.pem
	I1025 22:57:35.375175  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/669177.pem /etc/ssl/certs/51391683.0"
	I1025 22:57:35.386699  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6691772.pem && ln -fs /usr/share/ca-certificates/6691772.pem /etc/ssl/certs/6691772.pem"
	I1025 22:57:35.397181  728361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401747  728361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:46 /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.401797  728361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6691772.pem
	I1025 22:57:35.407254  728361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6691772.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:57:35.417716  728361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 22:57:35.422134  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:57:35.428825  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:57:35.435416  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:57:35.441327  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:57:35.446978  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:57:35.452887  728361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 22:57:35.458800  728361 kubeadm.go:392] StartCluster: {Name:newest-cni-357495 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-357495 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 22:57:35.458907  728361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 22:57:35.458975  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.508107  728361 cri.go:89] found id: ""
	I1025 22:57:35.508190  728361 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:57:35.518730  728361 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 22:57:35.518756  728361 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 22:57:35.518812  728361 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:57:35.528709  728361 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:57:35.529470  728361 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-357495" does not appear in /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:35.529808  728361 kubeconfig.go:62] /home/jenkins/minikube-integration/19758-661979/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-357495" cluster setting kubeconfig missing "newest-cni-357495" context setting]
	I1025 22:57:35.530280  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:35.531821  728361 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:57:35.541383  728361 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.113
	I1025 22:57:35.541408  728361 kubeadm.go:1160] stopping kube-system containers ...
	I1025 22:57:35.541426  728361 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 22:57:35.541475  728361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 22:57:35.581588  728361 cri.go:89] found id: ""
	I1025 22:57:35.581670  728361 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 22:57:35.597329  728361 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:57:35.606992  728361 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:57:35.607032  728361 kubeadm.go:157] found existing configuration files:
	
	I1025 22:57:35.607078  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:57:35.616052  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:57:35.616100  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:57:35.625202  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:57:35.634016  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:57:35.634060  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:57:35.643656  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.654009  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:57:35.654059  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:57:35.664119  728361 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:57:35.673468  728361 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:57:35.673524  728361 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:57:35.683499  728361 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:57:35.693207  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:35.800242  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.661671  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.883048  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:36.950556  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:37.060335  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:37.060456  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:37.560722  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.061291  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:38.560646  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:35.989111  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:36.002822  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:36.002901  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:36.042325  726389 cri.go:89] found id: ""
	I1025 22:57:36.042362  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.042373  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:36.042381  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:36.042446  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:36.083924  726389 cri.go:89] found id: ""
	I1025 22:57:36.083957  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.083968  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:36.083976  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:36.084047  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:36.117475  726389 cri.go:89] found id: ""
	I1025 22:57:36.117511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.117523  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:36.117531  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:36.117592  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:36.151851  726389 cri.go:89] found id: ""
	I1025 22:57:36.151888  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.151901  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:36.151909  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:36.151975  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:36.188798  726389 cri.go:89] found id: ""
	I1025 22:57:36.188825  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.188837  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:36.188845  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:36.188905  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:36.222491  726389 cri.go:89] found id: ""
	I1025 22:57:36.222532  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.222544  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:36.222555  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:36.222621  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:36.257481  726389 cri.go:89] found id: ""
	I1025 22:57:36.257511  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.257520  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:36.257527  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:36.257580  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:36.291774  726389 cri.go:89] found id: ""
	I1025 22:57:36.291805  726389 logs.go:282] 0 containers: []
	W1025 22:57:36.291817  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:36.291829  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:36.291845  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:36.341240  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:36.341288  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:36.355280  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:36.355312  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:36.420727  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:36.420756  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:36.420770  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:36.496896  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:36.496943  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.035530  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.053640  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:39.053721  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:39.095892  726389 cri.go:89] found id: ""
	I1025 22:57:39.095924  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.095936  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:39.095945  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:39.096010  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:39.135571  726389 cri.go:89] found id: ""
	I1025 22:57:39.135603  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.135614  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:39.135621  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:39.135680  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:39.174481  726389 cri.go:89] found id: ""
	I1025 22:57:39.174517  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.174530  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:39.174539  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:39.174597  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:39.214453  726389 cri.go:89] found id: ""
	I1025 22:57:39.214488  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.214505  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:39.214512  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:39.214565  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:39.251084  726389 cri.go:89] found id: ""
	I1025 22:57:39.251111  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.251119  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:39.251126  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:39.251186  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:39.292067  726389 cri.go:89] found id: ""
	I1025 22:57:39.292098  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.292108  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:39.292117  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:39.292183  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:39.331918  726389 cri.go:89] found id: ""
	I1025 22:57:39.331953  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.331964  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:39.331972  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:39.332032  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:39.366300  726389 cri.go:89] found id: ""
	I1025 22:57:39.366334  726389 logs.go:282] 0 containers: []
	W1025 22:57:39.366346  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:39.366358  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:39.366373  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:39.451297  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:39.451344  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:39.492655  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:39.492695  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:39.551959  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:39.552004  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:39.565900  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:39.565934  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:39.637894  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:39.061158  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:39.083761  728361 api_server.go:72] duration metric: took 2.023424888s to wait for apiserver process to appear ...
	I1025 22:57:39.083795  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:39.083833  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:39.084432  728361 api_server.go:269] stopped: https://192.168.72.113:8443/healthz: Get "https://192.168.72.113:8443/healthz": dial tcp 192.168.72.113:8443: connect: connection refused
	I1025 22:57:39.584481  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.830058  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.830086  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:41.830102  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:41.851621  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 22:57:41.851664  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 22:57:42.083965  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.098809  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.098843  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:42.583931  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:42.595538  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:42.595610  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.084096  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.099317  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 22:57:43.099347  728361 api_server.go:103] status: https://192.168.72.113:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 22:57:43.583916  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:43.588837  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:43.595393  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:43.595419  728361 api_server.go:131] duration metric: took 4.511617345s to wait for apiserver health ...
	I1025 22:57:43.595430  728361 cni.go:84] Creating CNI manager for ""
	I1025 22:57:43.595436  728361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 22:57:43.597362  728361 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 22:57:43.598677  728361 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 22:57:43.611172  728361 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 22:57:43.628657  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:43.639416  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:43.639446  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:43.639454  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:43.639466  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:43.639477  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:43.639487  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:43.639495  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:43.639505  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:43.639512  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:43.639518  728361 system_pods.go:74] duration metric: took 10.839818ms to wait for pod list to return data ...
	I1025 22:57:43.639528  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:43.646484  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:43.646509  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:43.646520  728361 node_conditions.go:105] duration metric: took 6.988285ms to run NodePressure ...
	I1025 22:57:43.646539  728361 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 22:57:43.915625  728361 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 22:57:43.934000  728361 ops.go:34] apiserver oom_adj: -16
	I1025 22:57:43.934020  728361 kubeadm.go:597] duration metric: took 8.415258105s to restartPrimaryControlPlane
	I1025 22:57:43.934029  728361 kubeadm.go:394] duration metric: took 8.475239856s to StartCluster
	I1025 22:57:43.934049  728361 settings.go:142] acquiring lock: {Name:mkd621bea53936781f4287299dc1be4fac85fd74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.934116  728361 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:57:43.935164  728361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19758-661979/kubeconfig: {Name:mkac1903951dd236254db3b0806fe78a0657cee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:57:43.935405  728361 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 22:57:43.935533  728361 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 22:57:43.935636  728361 config.go:182] Loaded profile config "newest-cni-357495": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:57:43.935668  728361 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-357495"
	I1025 22:57:43.935696  728361 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-357495"
	W1025 22:57:43.935713  728361 addons.go:243] addon storage-provisioner should already be in state true
	I1025 22:57:43.935727  728361 addons.go:69] Setting metrics-server=true in profile "newest-cni-357495"
	I1025 22:57:43.935749  728361 addons.go:234] Setting addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:43.935753  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	W1025 22:57:43.935763  728361 addons.go:243] addon metrics-server should already be in state true
	I1025 22:57:43.935818  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936205  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936245  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936283  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.935703  728361 addons.go:69] Setting default-storageclass=true in profile "newest-cni-357495"
	I1025 22:57:43.936320  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.936321  728361 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-357495"
	I1025 22:57:43.935713  728361 addons.go:69] Setting dashboard=true in profile "newest-cni-357495"
	I1025 22:57:43.936591  728361 addons.go:234] Setting addon dashboard=true in "newest-cni-357495"
	W1025 22:57:43.936602  728361 addons.go:243] addon dashboard should already be in state true
	I1025 22:57:43.936637  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.936834  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.936873  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937009  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.937048  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.937659  728361 out.go:177] * Verifying Kubernetes components...
	I1025 22:57:43.939144  728361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:57:43.955960  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42077
	I1025 22:57:43.956461  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.956979  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957007  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.957063  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40771
	I1025 22:57:43.957440  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.957472  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.957898  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.957919  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.958078  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958127  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.958280  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.958921  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.958970  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.960741  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I1025 22:57:43.961123  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.961708  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.961724  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.962094  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.962267  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.965281  728361 addons.go:234] Setting addon default-storageclass=true in "newest-cni-357495"
	W1025 22:57:43.965301  728361 addons.go:243] addon default-storageclass should already be in state true
	I1025 22:57:43.965333  728361 host.go:66] Checking if "newest-cni-357495" exists ...
	I1025 22:57:43.965612  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.965651  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.967851  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40677
	I1025 22:57:43.968252  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.968859  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.968877  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.969297  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.969895  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:43.969938  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:43.978224  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I1025 22:57:43.980247  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43065
	I1025 22:57:43.991129  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I1025 22:57:43.997786  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.997926  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998540  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998646  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:43.998705  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.998729  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.998995  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999070  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:43.999305  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999365  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:43.999543  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:43.999565  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:43.999954  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.000573  728361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:57:44.000731  728361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:57:44.001562  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.002141  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.003847  728361 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 22:57:44.005301  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 22:57:44.005326  728361 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 22:57:44.005353  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.008444  728361 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 22:57:44.009433  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.009938  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.009962  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.010211  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.010419  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.010565  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.010672  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.014136  728361 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.014160  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 22:57:44.014183  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.017633  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018066  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.018084  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.018360  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.018538  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.018671  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.018843  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.024748  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33567
	I1025 22:57:44.025455  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.025952  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.025974  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.027949  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.028345  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.030416  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.030592  728361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44143
	I1025 22:57:44.030623  728361 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.030636  728361 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 22:57:44.030653  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.031671  728361 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:57:44.032355  728361 main.go:141] libmachine: Using API Version  1
	I1025 22:57:44.032380  728361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:57:44.033013  728361 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:57:44.033268  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetState
	I1025 22:57:44.034055  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034580  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.034604  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.034914  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.035097  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.035108  728361 main.go:141] libmachine: (newest-cni-357495) Calling .DriverName
	I1025 22:57:44.035257  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.035424  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.037146  728361 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 22:57:44.038544  728361 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 22:57:42.138727  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:42.152525  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:42.152616  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:42.190900  726389 cri.go:89] found id: ""
	I1025 22:57:42.190935  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.190947  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:42.190955  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:42.191043  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:42.237668  726389 cri.go:89] found id: ""
	I1025 22:57:42.237698  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.237711  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:42.237720  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:42.237781  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:42.289049  726389 cri.go:89] found id: ""
	I1025 22:57:42.289077  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.289087  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:42.289096  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:42.289155  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:42.334276  726389 cri.go:89] found id: ""
	I1025 22:57:42.334306  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.334318  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:42.334327  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:42.334385  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:42.379295  726389 cri.go:89] found id: ""
	I1025 22:57:42.379317  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.379325  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:42.379331  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:42.379375  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:42.416452  726389 cri.go:89] found id: ""
	I1025 22:57:42.416484  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.416496  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:42.416504  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:42.416563  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:42.455290  726389 cri.go:89] found id: ""
	I1025 22:57:42.455324  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.455336  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:42.455352  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:42.455421  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:42.493367  726389 cri.go:89] found id: ""
	I1025 22:57:42.493396  726389 logs.go:282] 0 containers: []
	W1025 22:57:42.493413  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:42.493426  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:42.493444  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:42.511673  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:42.511724  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:42.589951  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:42.589976  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:42.589994  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:42.697460  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:42.697498  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:42.757645  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:42.757672  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:44.039861  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 22:57:44.039876  728361 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 22:57:44.039902  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHHostname
	I1025 22:57:44.043936  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044280  728361 main.go:141] libmachine: (newest-cni-357495) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:c0:76", ip: ""} in network mk-newest-cni-357495: {Iface:virbr4 ExpiryTime:2024-10-25 23:57:20 +0000 UTC Type:0 Mac:52:54:00:fb:c0:76 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:newest-cni-357495 Clientid:01:52:54:00:fb:c0:76}
	I1025 22:57:44.044300  728361 main.go:141] libmachine: (newest-cni-357495) DBG | domain newest-cni-357495 has defined IP address 192.168.72.113 and MAC address 52:54:00:fb:c0:76 in network mk-newest-cni-357495
	I1025 22:57:44.044646  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHPort
	I1025 22:57:44.044847  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHKeyPath
	I1025 22:57:44.045047  728361 main.go:141] libmachine: (newest-cni-357495) Calling .GetSSHUsername
	I1025 22:57:44.045212  728361 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/newest-cni-357495/id_rsa Username:docker}
	I1025 22:57:44.214968  728361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 22:57:44.230045  728361 api_server.go:52] waiting for apiserver process to appear ...
	I1025 22:57:44.230142  728361 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:44.256130  728361 api_server.go:72] duration metric: took 320.677383ms to wait for apiserver process to appear ...
	I1025 22:57:44.256168  728361 api_server.go:88] waiting for apiserver healthz status ...
	I1025 22:57:44.256195  728361 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I1025 22:57:44.261782  728361 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I1025 22:57:44.262769  728361 api_server.go:141] control plane version: v1.31.1
	I1025 22:57:44.262792  728361 api_server.go:131] duration metric: took 6.616839ms to wait for apiserver health ...
	I1025 22:57:44.262808  728361 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 22:57:44.268736  728361 system_pods.go:59] 8 kube-system pods found
	I1025 22:57:44.268771  728361 system_pods.go:61] "coredns-7c65d6cfc9-gdn8b" [d930c360-1ecf-4a30-be3c-f3159e9e3c54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 22:57:44.268782  728361 system_pods.go:61] "etcd-newest-cni-357495" [d1e1ea54-c661-46f4-90ed-a8681b837c68] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 22:57:44.268794  728361 system_pods.go:61] "kube-apiserver-newest-cni-357495" [e7bbb737-5813-4e39-bf0c-e51250663d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 22:57:44.268802  728361 system_pods.go:61] "kube-controller-manager-newest-cni-357495" [486c74ca-7ca3-4816-ab43-890d29b4face] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 22:57:44.268811  728361 system_pods.go:61] "kube-proxy-tmpdb" [406e84ac-bc0c-4eda-a0de-1417d866649f] Running
	I1025 22:57:44.268824  728361 system_pods.go:61] "kube-scheduler-newest-cni-357495" [50c6e5ea-ee14-477e-a94b-8284615777bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 22:57:44.268835  728361 system_pods.go:61] "metrics-server-6867b74b74-w9dvz" [fee8b8bd-7da6-4b8b-9804-67888b1fc868] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:57:44.268844  728361 system_pods.go:61] "storage-provisioner" [35061304-9e5a-4f66-875c-103f43be807a] Running
	I1025 22:57:44.268853  728361 system_pods.go:74] duration metric: took 6.033238ms to wait for pod list to return data ...
	I1025 22:57:44.268865  728361 default_sa.go:34] waiting for default service account to be created ...
	I1025 22:57:44.274413  728361 default_sa.go:45] found service account: "default"
	I1025 22:57:44.274435  728361 default_sa.go:55] duration metric: took 5.560777ms for default service account to be created ...
	I1025 22:57:44.274448  728361 kubeadm.go:582] duration metric: took 339.005004ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:57:44.274466  728361 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:57:44.276931  728361 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1025 22:57:44.276950  728361 node_conditions.go:123] node cpu capacity is 2
	I1025 22:57:44.276977  728361 node_conditions.go:105] duration metric: took 2.502915ms to run NodePressure ...
	I1025 22:57:44.276992  728361 start.go:241] waiting for startup goroutines ...
	I1025 22:57:44.300122  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 22:57:44.327780  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 22:57:44.327815  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 22:57:44.334907  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 22:57:44.334936  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 22:57:44.365482  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 22:57:44.365518  728361 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 22:57:44.376945  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 22:57:44.441691  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 22:57:44.441722  728361 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 22:57:44.443225  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 22:57:44.443247  728361 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 22:57:44.510983  728361 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.511014  728361 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 22:57:44.522596  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 22:57:44.522631  728361 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 22:57:44.593578  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 22:57:44.600368  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 22:57:44.600392  728361 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 22:57:44.687614  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 22:57:44.687642  728361 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 22:57:44.726363  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 22:57:44.726391  728361 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 22:57:44.771220  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 22:57:44.771247  728361 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 22:57:44.800050  728361 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:44.800079  728361 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 22:57:44.875738  728361 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 22:57:46.117050  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816877105s)
	I1025 22:57:46.117115  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.740124565s)
	I1025 22:57:46.117165  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117185  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117211  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.52359958s)
	I1025 22:57:46.117120  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117287  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117247  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117367  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117495  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117543  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117552  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117560  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117567  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117623  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117642  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.117663  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117671  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117687  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.117713  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.117739  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.117751  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.117767  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.120140  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120155  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120155  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120172  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.120168  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120191  728361 addons.go:475] Verifying addon metrics-server=true in "newest-cni-357495"
	I1025 22:57:46.120226  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120252  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.120604  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.120614  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.137578  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.137598  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.137943  728361 main.go:141] libmachine: (newest-cni-357495) DBG | Closing plugin on server side
	I1025 22:57:46.137945  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.137973  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545157  728361 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.669353935s)
	I1025 22:57:46.545231  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545247  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545621  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545660  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.545679  728361 main.go:141] libmachine: Making call to close driver server
	I1025 22:57:46.545693  728361 main.go:141] libmachine: (newest-cni-357495) Calling .Close
	I1025 22:57:46.545954  728361 main.go:141] libmachine: Successfully made call to close driver server
	I1025 22:57:46.545969  728361 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 22:57:46.547693  728361 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-357495 addons enable metrics-server
	
	I1025 22:57:46.549219  728361 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1025 22:57:46.550703  728361 addons.go:510] duration metric: took 2.615173183s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1025 22:57:46.550752  728361 start.go:246] waiting for cluster config update ...
	I1025 22:57:46.550768  728361 start.go:255] writing updated cluster config ...
	I1025 22:57:46.551105  728361 ssh_runner.go:195] Run: rm -f paused
	I1025 22:57:46.603794  728361 start.go:600] kubectl: 1.31.2, cluster: 1.31.1 (minor skew: 0)
	I1025 22:57:46.605589  728361 out.go:177] * Done! kubectl is now configured to use "newest-cni-357495" cluster and "default" namespace by default
	I1025 22:57:45.312071  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:45.325800  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:45.325881  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:45.370543  726389 cri.go:89] found id: ""
	I1025 22:57:45.370572  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.370582  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:45.370590  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:45.370659  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:45.411970  726389 cri.go:89] found id: ""
	I1025 22:57:45.412009  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.412022  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:45.412032  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:45.412099  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:45.445037  726389 cri.go:89] found id: ""
	I1025 22:57:45.445073  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.445085  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:45.445094  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:45.445158  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:45.483563  726389 cri.go:89] found id: ""
	I1025 22:57:45.483595  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.483607  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:45.483615  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:45.483683  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:45.522944  726389 cri.go:89] found id: ""
	I1025 22:57:45.522978  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.522991  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:45.522999  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:45.523060  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:45.558055  726389 cri.go:89] found id: ""
	I1025 22:57:45.558086  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.558099  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:45.558107  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:45.558172  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:45.591533  726389 cri.go:89] found id: ""
	I1025 22:57:45.591564  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.591574  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:45.591581  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:45.591651  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:45.634951  726389 cri.go:89] found id: ""
	I1025 22:57:45.634985  726389 logs.go:282] 0 containers: []
	W1025 22:57:45.634996  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:45.635009  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:45.635026  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:45.684807  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:45.684847  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:45.699038  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:45.699072  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:45.762687  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:45.762718  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:45.762736  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:45.851222  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:45.851265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:48.389992  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:48.403774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:48.403842  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:48.441883  726389 cri.go:89] found id: ""
	I1025 22:57:48.441908  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.441919  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:48.441929  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:48.441982  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:48.477527  726389 cri.go:89] found id: ""
	I1025 22:57:48.477550  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.477558  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:48.477564  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:48.477612  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:48.514457  726389 cri.go:89] found id: ""
	I1025 22:57:48.514489  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.514500  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:48.514510  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:48.514579  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:48.551264  726389 cri.go:89] found id: ""
	I1025 22:57:48.551296  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.551306  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:48.551312  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:48.551369  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:48.585426  726389 cri.go:89] found id: ""
	I1025 22:57:48.585454  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.585465  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:48.585473  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:48.585537  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:48.623734  726389 cri.go:89] found id: ""
	I1025 22:57:48.623772  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.623785  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:48.623794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:48.623865  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:48.661170  726389 cri.go:89] found id: ""
	I1025 22:57:48.661207  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.661219  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:48.661227  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:48.661304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:48.700776  726389 cri.go:89] found id: ""
	I1025 22:57:48.700803  726389 logs.go:282] 0 containers: []
	W1025 22:57:48.700812  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:48.700825  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:48.700842  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:48.753294  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:48.753326  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:48.770412  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:48.770443  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:48.847535  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:48.847562  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:48.847577  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:48.920817  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:48.920862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:51.460695  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:51.473870  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:51.473945  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:51.510350  726389 cri.go:89] found id: ""
	I1025 22:57:51.510383  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.510393  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:51.510406  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:51.510480  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:51.546705  726389 cri.go:89] found id: ""
	I1025 22:57:51.546742  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.546754  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:51.546762  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:51.546830  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:51.583728  726389 cri.go:89] found id: ""
	I1025 22:57:51.583759  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.583767  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:51.583774  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:51.583831  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:51.623229  726389 cri.go:89] found id: ""
	I1025 22:57:51.623260  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.623269  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:51.623275  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:51.623332  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:51.661673  726389 cri.go:89] found id: ""
	I1025 22:57:51.661700  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.661710  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:51.661716  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:51.661769  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:51.707516  726389 cri.go:89] found id: ""
	I1025 22:57:51.707551  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.707564  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:51.707572  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:51.707646  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:51.745242  726389 cri.go:89] found id: ""
	I1025 22:57:51.745277  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.745288  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:51.745295  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:51.745360  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:51.778136  726389 cri.go:89] found id: ""
	I1025 22:57:51.778165  726389 logs.go:282] 0 containers: []
	W1025 22:57:51.778180  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:51.778193  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:51.778210  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:51.826323  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:51.826365  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:51.839635  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:51.839673  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:51.905218  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:51.905242  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:51.905260  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:51.979641  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:51.979680  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.519362  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:54.532482  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:54.532560  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:54.566193  726389 cri.go:89] found id: ""
	I1025 22:57:54.566221  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.566232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:54.566240  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:54.566304  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:54.602139  726389 cri.go:89] found id: ""
	I1025 22:57:54.602166  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.602178  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:54.602187  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:54.602245  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:54.636484  726389 cri.go:89] found id: ""
	I1025 22:57:54.636519  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.636529  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:54.636545  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:54.636610  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:54.670617  726389 cri.go:89] found id: ""
	I1025 22:57:54.670649  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.670660  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:54.670666  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:54.670726  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:54.702360  726389 cri.go:89] found id: ""
	I1025 22:57:54.702400  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.702412  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:54.702420  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:54.702491  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:54.736101  726389 cri.go:89] found id: ""
	I1025 22:57:54.736140  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.736153  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:54.736161  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:54.736225  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:54.768706  726389 cri.go:89] found id: ""
	I1025 22:57:54.768744  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.768757  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:54.768766  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:54.768828  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:54.800919  726389 cri.go:89] found id: ""
	I1025 22:57:54.800965  726389 logs.go:282] 0 containers: []
	W1025 22:57:54.800978  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:54.800989  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:54.801008  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:54.866242  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:54.866277  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:54.866294  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:54.942084  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:54.942127  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:57:54.979383  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:54.979422  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:55.029227  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:55.029269  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.543312  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:57:57.557090  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:57:57.557176  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:57:57.594813  726389 cri.go:89] found id: ""
	I1025 22:57:57.594847  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.594860  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:57:57.594868  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:57:57.594933  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:57:57.629736  726389 cri.go:89] found id: ""
	I1025 22:57:57.629769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.629781  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:57:57.629790  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:57:57.629855  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:57:57.663895  726389 cri.go:89] found id: ""
	I1025 22:57:57.663927  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.663935  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:57:57.663940  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:57:57.663991  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:57:57.696122  726389 cri.go:89] found id: ""
	I1025 22:57:57.696153  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.696164  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:57:57.696171  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:57:57.696238  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:57:57.733740  726389 cri.go:89] found id: ""
	I1025 22:57:57.733769  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.733778  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:57:57.733785  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:57:57.733839  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:57:57.766855  726389 cri.go:89] found id: ""
	I1025 22:57:57.766886  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.766897  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:57:57.766905  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:57:57.766971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:57:57.804080  726389 cri.go:89] found id: ""
	I1025 22:57:57.804110  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.804118  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:57:57.804125  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:57:57.804178  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:57:57.837482  726389 cri.go:89] found id: ""
	I1025 22:57:57.837511  726389 logs.go:282] 0 containers: []
	W1025 22:57:57.837520  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:57:57.837530  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:57:57.837542  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:57:57.889217  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:57:57.889265  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:57:57.902999  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:57:57.903039  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:57:57.968303  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:57:57.968327  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:57:57.968345  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:57:58.046929  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:57:58.046981  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:00.589410  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:00.602271  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:00.602344  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:00.635947  726389 cri.go:89] found id: ""
	I1025 22:58:00.635980  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.635989  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:00.635995  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:00.636057  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:00.668039  726389 cri.go:89] found id: ""
	I1025 22:58:00.668072  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.668083  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:00.668092  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:00.668163  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:00.700889  726389 cri.go:89] found id: ""
	I1025 22:58:00.700916  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.700925  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:00.700931  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:00.701026  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:00.734409  726389 cri.go:89] found id: ""
	I1025 22:58:00.734440  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.734452  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:00.734459  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:00.734527  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:00.770435  726389 cri.go:89] found id: ""
	I1025 22:58:00.770462  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.770469  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:00.770476  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:00.770535  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:00.803431  726389 cri.go:89] found id: ""
	I1025 22:58:00.803466  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.803477  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:00.803486  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:00.803552  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:00.837896  726389 cri.go:89] found id: ""
	I1025 22:58:00.837932  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.837943  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:00.837951  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:00.838025  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:00.875375  726389 cri.go:89] found id: ""
	I1025 22:58:00.875414  726389 logs.go:282] 0 containers: []
	W1025 22:58:00.875425  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:00.875437  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:00.875453  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:00.925019  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:00.925057  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:00.938018  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:00.938050  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:01.008170  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:01.008199  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:01.008216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:01.082487  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:01.082530  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:03.623673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:03.637286  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:03.637371  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:03.673836  726389 cri.go:89] found id: ""
	I1025 22:58:03.673884  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.673897  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:03.673906  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:03.673971  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:03.706700  726389 cri.go:89] found id: ""
	I1025 22:58:03.706731  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.706742  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:03.706750  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:03.706818  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:03.738775  726389 cri.go:89] found id: ""
	I1025 22:58:03.738804  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.738815  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:03.738823  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:03.738889  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:03.770246  726389 cri.go:89] found id: ""
	I1025 22:58:03.770274  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.770284  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:03.770292  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:03.770366  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:03.811193  726389 cri.go:89] found id: ""
	I1025 22:58:03.811222  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.811231  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:03.811237  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:03.811290  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:03.842644  726389 cri.go:89] found id: ""
	I1025 22:58:03.842678  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.842686  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:03.842693  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:03.842750  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:03.874753  726389 cri.go:89] found id: ""
	I1025 22:58:03.874780  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.874788  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:03.874794  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:03.874845  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:03.907133  726389 cri.go:89] found id: ""
	I1025 22:58:03.907162  726389 logs.go:282] 0 containers: []
	W1025 22:58:03.907173  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:03.907186  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:03.907202  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:03.957250  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:03.957287  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:03.970381  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:03.970408  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:04.033620  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:04.033647  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:04.033663  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:04.108254  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:04.108296  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:06.647214  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:06.660871  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:06.660942  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:06.694191  726389 cri.go:89] found id: ""
	I1025 22:58:06.694223  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.694232  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:06.694243  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:06.694295  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:06.728177  726389 cri.go:89] found id: ""
	I1025 22:58:06.728209  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.728222  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:06.728229  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:06.728300  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:06.761968  726389 cri.go:89] found id: ""
	I1025 22:58:06.762003  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.762015  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:06.762022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:06.762089  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:06.794139  726389 cri.go:89] found id: ""
	I1025 22:58:06.794172  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.794186  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:06.794195  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:06.794261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:06.830436  726389 cri.go:89] found id: ""
	I1025 22:58:06.830468  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.830481  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:06.830490  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:06.830557  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:06.865350  726389 cri.go:89] found id: ""
	I1025 22:58:06.865391  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.865405  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:06.865412  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:06.865468  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:06.899259  726389 cri.go:89] found id: ""
	I1025 22:58:06.899288  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.899298  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:06.899304  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:06.899354  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:06.930753  726389 cri.go:89] found id: ""
	I1025 22:58:06.930784  726389 logs.go:282] 0 containers: []
	W1025 22:58:06.930793  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:06.930802  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:06.930813  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:06.943437  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:06.943464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:07.012837  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:07.012862  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:07.012875  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:07.085555  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:07.085606  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:07.125421  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:07.125464  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:09.678235  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:09.691802  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 22:58:09.691884  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 22:58:09.730774  726389 cri.go:89] found id: ""
	I1025 22:58:09.730813  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.730826  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 22:58:09.730838  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 22:58:09.730893  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 22:58:09.768841  726389 cri.go:89] found id: ""
	I1025 22:58:09.768878  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.768894  726389 logs.go:284] No container was found matching "etcd"
	I1025 22:58:09.768903  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 22:58:09.768984  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 22:58:09.802970  726389 cri.go:89] found id: ""
	I1025 22:58:09.803001  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.803013  726389 logs.go:284] No container was found matching "coredns"
	I1025 22:58:09.803022  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 22:58:09.803093  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 22:58:09.835041  726389 cri.go:89] found id: ""
	I1025 22:58:09.835075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.835087  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 22:58:09.835095  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 22:58:09.835148  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 22:58:09.868561  726389 cri.go:89] found id: ""
	I1025 22:58:09.868590  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.868601  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 22:58:09.868609  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 22:58:09.868689  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 22:58:09.901694  726389 cri.go:89] found id: ""
	I1025 22:58:09.901721  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.901730  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 22:58:09.901737  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 22:58:09.901793  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 22:58:09.936138  726389 cri.go:89] found id: ""
	I1025 22:58:09.936167  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.936178  726389 logs.go:284] No container was found matching "kindnet"
	I1025 22:58:09.936187  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 22:58:09.936250  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 22:58:09.969041  726389 cri.go:89] found id: ""
	I1025 22:58:09.969075  726389 logs.go:282] 0 containers: []
	W1025 22:58:09.969087  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 22:58:09.969100  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:58:09.969115  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:58:10.036786  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 22:58:10.036816  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 22:58:10.036832  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 22:58:10.108946  726389 logs.go:123] Gathering logs for container status ...
	I1025 22:58:10.109015  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:58:10.150241  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 22:58:10.150278  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 22:58:10.201815  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 22:58:10.201862  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:58:12.715673  726389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:58:12.729286  726389 kubeadm.go:597] duration metric: took 4m4.085037105s to restartPrimaryControlPlane
	W1025 22:58:12.729380  726389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 22:58:12.729407  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 22:58:13.183339  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:58:13.197871  726389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:58:13.207895  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:58:13.217907  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:58:13.217929  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 22:58:13.217990  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 22:58:13.227351  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 22:58:13.227422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 22:58:13.237158  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 22:58:13.246361  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 22:58:13.246431  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 22:58:13.256260  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.265821  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 22:58:13.265885  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 22:58:13.275535  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 22:58:13.284737  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 22:58:13.284804  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 22:58:13.294340  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 22:58:13.357520  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 22:58:13.357618  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 22:58:13.492934  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:58:13.493109  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:58:13.493237  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:58:13.676988  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:58:13.679089  726389 out.go:235]   - Generating certificates and keys ...
	I1025 22:58:13.679191  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 22:58:13.679294  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 22:58:13.679410  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:58:13.679499  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:58:13.679591  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:58:13.679673  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 22:58:13.679773  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:58:13.679860  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:58:13.679958  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:58:13.680063  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:58:13.680117  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 22:58:13.680195  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:58:13.792687  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:58:13.867665  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:58:14.014215  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:58:14.157457  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:58:14.181574  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:58:14.181693  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:58:14.181766  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 22:58:14.322320  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:58:14.324285  726389 out.go:235]   - Booting up control plane ...
	I1025 22:58:14.324402  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:58:14.328027  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:58:14.331034  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:58:14.332233  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:58:14.340260  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:58:54.338405  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:58:54.338592  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:54.338841  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:58:59.339365  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:58:59.339661  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:09.340395  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:09.340593  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 22:59:29.341629  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 22:59:29.341864  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.342793  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:09.343142  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:09.343171  726389 kubeadm.go:310] 
	I1025 23:00:09.343244  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:00:09.343309  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:00:09.343320  726389 kubeadm.go:310] 
	I1025 23:00:09.343358  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:00:09.343390  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:00:09.343481  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:00:09.343489  726389 kubeadm.go:310] 
	I1025 23:00:09.343609  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:00:09.343655  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:00:09.343701  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:00:09.343711  726389 kubeadm.go:310] 
	I1025 23:00:09.343811  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:00:09.343886  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:00:09.343898  726389 kubeadm.go:310] 
	I1025 23:00:09.344020  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:00:09.344148  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:00:09.344258  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:00:09.344355  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:00:09.344365  726389 kubeadm.go:310] 
	I1025 23:00:09.345056  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:00:09.345170  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:00:09.345261  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1025 23:00:09.345502  726389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1025 23:00:09.345550  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1025 23:00:09.805116  726389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 23:00:09.820225  726389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 23:00:09.829679  726389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 23:00:09.829702  726389 kubeadm.go:157] found existing configuration files:
	
	I1025 23:00:09.829756  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 23:00:09.838792  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 23:00:09.838857  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 23:00:09.847823  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 23:00:09.856364  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 23:00:09.856422  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 23:00:09.865400  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.873766  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 23:00:09.873827  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 23:00:09.882969  726389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 23:00:09.891527  726389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 23:00:09.891606  726389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 23:00:09.900940  726389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 23:00:09.969506  726389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1025 23:00:09.969568  726389 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 23:00:10.115097  726389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 23:00:10.115224  726389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 23:00:10.115397  726389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 23:00:10.293601  726389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 23:00:10.296142  726389 out.go:235]   - Generating certificates and keys ...
	I1025 23:00:10.296255  726389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 23:00:10.296361  726389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 23:00:10.296502  726389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 23:00:10.296583  726389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 23:00:10.296676  726389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 23:00:10.296748  726389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 23:00:10.296840  726389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 23:00:10.296949  726389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 23:00:10.297071  726389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 23:00:10.297182  726389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 23:00:10.297236  726389 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 23:00:10.297334  726389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 23:00:10.411124  726389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 23:00:10.530014  726389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 23:00:10.624647  726389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 23:00:10.777858  726389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 23:00:10.797014  726389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 23:00:10.798078  726389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 23:00:10.798168  726389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 23:00:10.940610  726389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 23:00:10.942427  726389 out.go:235]   - Booting up control plane ...
	I1025 23:00:10.942572  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 23:00:10.959667  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 23:00:10.959757  726389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 23:00:10.959910  726389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 23:00:10.963884  726389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 23:00:50.966097  726389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1025 23:00:50.966211  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:50.966448  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:00:55.966794  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:00:55.967051  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:05.967421  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:05.967674  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:01:25.968507  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:01:25.968765  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969405  726389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1025 23:02:05.969627  726389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1025 23:02:05.969639  726389 kubeadm.go:310] 
	I1025 23:02:05.969676  726389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1025 23:02:05.969777  726389 kubeadm.go:310] 		timed out waiting for the condition
	I1025 23:02:05.969821  726389 kubeadm.go:310] 
	I1025 23:02:05.969885  726389 kubeadm.go:310] 	This error is likely caused by:
	I1025 23:02:05.969935  726389 kubeadm.go:310] 		- The kubelet is not running
	I1025 23:02:05.970078  726389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 23:02:05.970092  726389 kubeadm.go:310] 
	I1025 23:02:05.970248  726389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 23:02:05.970290  726389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1025 23:02:05.970375  726389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1025 23:02:05.970388  726389 kubeadm.go:310] 
	I1025 23:02:05.970517  726389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 23:02:05.970595  726389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1025 23:02:05.970602  726389 kubeadm.go:310] 
	I1025 23:02:05.970729  726389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1025 23:02:05.970840  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1025 23:02:05.970914  726389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1025 23:02:05.971019  726389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1025 23:02:05.971031  726389 kubeadm.go:310] 
	I1025 23:02:05.971808  726389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 23:02:05.971923  726389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 23:02:05.972087  726389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1025 23:02:05.972124  726389 kubeadm.go:394] duration metric: took 7m57.377970738s to StartCluster
	I1025 23:02:05.972182  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 23:02:05.972244  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 23:02:06.012800  726389 cri.go:89] found id: ""
	I1025 23:02:06.012837  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.012852  726389 logs.go:284] No container was found matching "kube-apiserver"
	I1025 23:02:06.012860  726389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 23:02:06.012925  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 23:02:06.051712  726389 cri.go:89] found id: ""
	I1025 23:02:06.051748  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.051761  726389 logs.go:284] No container was found matching "etcd"
	I1025 23:02:06.051769  726389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 23:02:06.051834  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 23:02:06.084904  726389 cri.go:89] found id: ""
	I1025 23:02:06.084939  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.084950  726389 logs.go:284] No container was found matching "coredns"
	I1025 23:02:06.084973  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 23:02:06.085056  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 23:02:06.120083  726389 cri.go:89] found id: ""
	I1025 23:02:06.120121  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.120133  726389 logs.go:284] No container was found matching "kube-scheduler"
	I1025 23:02:06.120140  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 23:02:06.120197  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 23:02:06.154172  726389 cri.go:89] found id: ""
	I1025 23:02:06.154197  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.154205  726389 logs.go:284] No container was found matching "kube-proxy"
	I1025 23:02:06.154211  726389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 23:02:06.154261  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 23:02:06.187085  726389 cri.go:89] found id: ""
	I1025 23:02:06.187130  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.187143  726389 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 23:02:06.187152  726389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 23:02:06.187220  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 23:02:06.220391  726389 cri.go:89] found id: ""
	I1025 23:02:06.220421  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.220430  726389 logs.go:284] No container was found matching "kindnet"
	I1025 23:02:06.220437  726389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1025 23:02:06.220503  726389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1025 23:02:06.254240  726389 cri.go:89] found id: ""
	I1025 23:02:06.254274  726389 logs.go:282] 0 containers: []
	W1025 23:02:06.254286  726389 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1025 23:02:06.254301  726389 logs.go:123] Gathering logs for kubelet ...
	I1025 23:02:06.254340  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 23:02:06.301861  726389 logs.go:123] Gathering logs for dmesg ...
	I1025 23:02:06.301907  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 23:02:06.315888  726389 logs.go:123] Gathering logs for describe nodes ...
	I1025 23:02:06.315919  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 23:02:06.386034  726389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 23:02:06.386073  726389 logs.go:123] Gathering logs for CRI-O ...
	I1025 23:02:06.386091  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 23:02:06.487167  726389 logs.go:123] Gathering logs for container status ...
	I1025 23:02:06.487216  726389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 23:02:06.539615  726389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 23:02:06.539690  726389 out.go:270] * 
	W1025 23:02:06.539895  726389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.539922  726389 out.go:270] * 
	W1025 23:02:06.540790  726389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 23:02:06.545196  726389 out.go:201] 
	W1025 23:02:06.546506  726389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 23:02:06.546544  726389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 23:02:06.546564  726389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 23:02:06.548055  726389 out.go:201] 
	
	
	==> CRI-O <==
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.652086396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729898257652065158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69e9ad53-96f3-49c5-a9c7-b4a769cdfd61 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.652617639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=649d3be2-99d0-4330-9d03-3b533bfe8a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.652714079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=649d3be2-99d0-4330-9d03-3b533bfe8a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.652774446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=649d3be2-99d0-4330-9d03-3b533bfe8a06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.684793532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1cd17aa-d778-443a-8846-f0f6e12e634d name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.684881456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1cd17aa-d778-443a-8846-f0f6e12e634d name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.686186910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd21cd14-251d-4989-9eea-7f533d3c4916 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.686641564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729898257686616047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd21cd14-251d-4989-9eea-7f533d3c4916 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.687335731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dcd79fc-211a-4184-a1fe-68dac49d582e name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.687390705Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dcd79fc-211a-4184-a1fe-68dac49d582e name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.687423130Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2dcd79fc-211a-4184-a1fe-68dac49d582e name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.717524185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d939304-8cf3-43bd-8b6c-5601fa068f61 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.717619494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d939304-8cf3-43bd-8b6c-5601fa068f61 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.718759994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9234c44-1c4e-404f-93b0-9e5f9eba3303 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.719149767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729898257719127916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9234c44-1c4e-404f-93b0-9e5f9eba3303 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.719617873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87aaea8f-bf96-421a-8f7c-68dfca432e26 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.719730686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87aaea8f-bf96-421a-8f7c-68dfca432e26 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.719778951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=87aaea8f-bf96-421a-8f7c-68dfca432e26 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.751472321Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d636380c-8134-4a49-a157-e1e0873d29f8 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.751567641Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d636380c-8134-4a49-a157-e1e0873d29f8 name=/runtime.v1.RuntimeService/Version
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.752597602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c34d316-4d8e-4a6a-bb78-1ba94dcea334 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.753042077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1729898257753017986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c34d316-4d8e-4a6a-bb78-1ba94dcea334 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.753510181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d32599d7-e8bd-49a4-bd22-4002e33623f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.753592219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d32599d7-e8bd-49a4-bd22-4002e33623f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 25 23:17:37 old-k8s-version-005932 crio[631]: time="2024-10-25 23:17:37.753637888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d32599d7-e8bd-49a4-bd22-4002e33623f3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct25 22:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053538] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993033] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.634497] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.621665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 22:54] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.064930] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061174] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.184894] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.167513] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.254112] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.419742] systemd-fstab-generator[883]: Ignoring "noauto" option for root device
	[  +0.063304] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.826111] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +11.981319] kauditd_printk_skb: 46 callbacks suppressed
	[Oct25 22:58] systemd-fstab-generator[5077]: Ignoring "noauto" option for root device
	[Oct25 23:00] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.059452] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 23:17:37 up 23 min,  0 users,  load average: 0.05, 0.01, 0.02
	Linux old-k8s-version-005932 5.10.207 #1 SMP Tue Oct 15 19:19:35 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000968d10, 0xc000bb6180)
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: goroutine 153 [chan receive]:
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a8d0e0)
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: goroutine 154 [select]:
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bcdef0, 0x4f0ac20, 0xc000924ff0, 0x1, 0xc00009e0c0)
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00055ad20, 0xc00009e0c0)
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000968d50, 0xc000bb6240)
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 25 23:17:37 old-k8s-version-005932 kubelet[7252]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 25 23:17:37 old-k8s-version-005932 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 25 23:17:37 old-k8s-version-005932 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 25 23:17:37 old-k8s-version-005932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 182.
	Oct 25 23:17:37 old-k8s-version-005932 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 25 23:17:37 old-k8s-version-005932 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 2 (228.920909ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-005932" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (388.41s)
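
For context, this failure reduces to the kubelet on the old-k8s-version (v1.20.0) node never answering its health check, so kubeadm times out waiting for the control plane and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch, using only commands the log above already suggests (the profile name and the cgroup-driver flag are copied from that output; nothing here was run against this job):

	# Inspect the kubelet on the affected node (commands suggested in the kubeadm output above)
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List any control-plane containers CRI-O may have started (also from the log's advice)
	out/minikube-linux-amd64 -p old-k8s-version-005932 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# minikube's own suggestion for this failure mode, quoted near the end of the log
	out/minikube-linux-amd64 start -p old-k8s-version-005932 --extra-config=kubelet.cgroup-driver=systemd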

                                                
                                    

Test pass (277/326)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 14.71
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 110.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 133.38
31 TestAddons/serial/GCPAuth/Namespaces 1.91
32 TestAddons/serial/GCPAuth/FakeCredentials 11.51
35 TestAddons/parallel/Registry 17.11
37 TestAddons/parallel/InspektorGadget 10.71
40 TestAddons/parallel/CSI 63.71
41 TestAddons/parallel/Headlamp 20.91
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 56.17
44 TestAddons/parallel/NvidiaDevicePlugin 7.07
45 TestAddons/parallel/Yakd 11.73
47 TestAddons/StoppedEnableDisable 91.27
48 TestCertOptions 80.81
49 TestCertExpiration 284.61
51 TestForceSystemdFlag 101.14
52 TestForceSystemdEnv 44.68
54 TestKVMDriverInstallOrUpdate 4.49
58 TestErrorSpam/setup 43.93
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.86
63 TestErrorSpam/stop 5.32
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.32
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.2
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.4
75 TestFunctional/serial/CacheCmd/cache/add_local 2.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 34.29
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.45
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.13
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 29.96
91 TestFunctional/parallel/DryRun 0.59
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.89
97 TestFunctional/parallel/ServiceCmdConnect 10.92
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 49.84
101 TestFunctional/parallel/SSHCmd 0.44
102 TestFunctional/parallel/CpCmd 1.28
103 TestFunctional/parallel/MySQL 27.58
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.31
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.62
114 TestFunctional/parallel/ServiceCmd/DeployApp 12.19
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.66
117 TestFunctional/parallel/ImageCommands/ImageListShort 1.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
121 TestFunctional/parallel/ImageCommands/ImageBuild 7.62
122 TestFunctional/parallel/ImageCommands/Setup 1.85
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.61
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.95
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.82
127 TestFunctional/parallel/ImageCommands/ImageRemove 1.83
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.94
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
130 TestFunctional/parallel/ProfileCmd/profile_list 0.44
131 TestFunctional/parallel/ServiceCmd/List 0.35
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
135 TestFunctional/parallel/MountCmd/any-port 8.57
136 TestFunctional/parallel/ServiceCmd/Format 0.31
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
138 TestFunctional/parallel/ServiceCmd/URL 0.33
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
142 TestFunctional/parallel/MountCmd/specific-port 1.96
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 197.03
160 TestMultiControlPlane/serial/DeployApp 8.22
161 TestMultiControlPlane/serial/PingHostFromPods 1.22
162 TestMultiControlPlane/serial/AddWorkerNode 57.79
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
165 TestMultiControlPlane/serial/CopyFile 13.17
166 TestMultiControlPlane/serial/StopSecondaryNode 91.65
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 52.54
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 431.18
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.89
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
173 TestMultiControlPlane/serial/StopCluster 272.95
174 TestMultiControlPlane/serial/RestartCluster 124.64
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 78.3
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
181 TestJSONOutput/start/Command 55.26
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.67
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.63
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.35
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 93.37
213 TestMountStart/serial/StartWithMountFirst 28.55
214 TestMountStart/serial/VerifyMountFirst 0.38
215 TestMountStart/serial/StartWithMountSecond 29.51
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.68
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.33
220 TestMountStart/serial/RestartStopped 23.18
221 TestMountStart/serial/VerifyMountPostStop 0.38
224 TestMultiNode/serial/FreshStart2Nodes 115.83
225 TestMultiNode/serial/DeployApp2Nodes 5.34
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 50.08
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.59
230 TestMultiNode/serial/CopyFile 7.35
231 TestMultiNode/serial/StopNode 2.28
232 TestMultiNode/serial/StartAfterStop 39.46
233 TestMultiNode/serial/RestartKeepsNodes 344.14
234 TestMultiNode/serial/DeleteNode 2.25
235 TestMultiNode/serial/StopMultiNode 181.68
236 TestMultiNode/serial/RestartMultiNode 113.81
237 TestMultiNode/serial/ValidateNameConflict 44.37
244 TestScheduledStopUnix 112.44
248 TestRunningBinaryUpgrade 245.35
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
254 TestNoKubernetes/serial/StartWithK8s 94.01
255 TestNoKubernetes/serial/StartWithStopK8s 42.97
256 TestNoKubernetes/serial/Start 55.78
264 TestNetworkPlugins/group/false 2.98
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
269 TestNoKubernetes/serial/ProfileList 29.26
270 TestNoKubernetes/serial/Stop 2.56
271 TestNoKubernetes/serial/StartNoArgs 23.23
272 TestStoppedBinaryUpgrade/Setup 3.03
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
274 TestStoppedBinaryUpgrade/Upgrade 118.51
283 TestPause/serial/Start 97.72
284 TestPause/serial/SecondStartNoReconfiguration 38.56
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
286 TestNetworkPlugins/group/auto/Start 54.48
287 TestPause/serial/Pause 0.72
288 TestPause/serial/VerifyStatus 0.24
289 TestPause/serial/Unpause 0.66
290 TestPause/serial/PauseAgain 0.79
291 TestPause/serial/DeletePaused 0.81
292 TestPause/serial/VerifyDeletedResources 3.72
293 TestNetworkPlugins/group/kindnet/Start 66.6
294 TestNetworkPlugins/group/calico/Start 102.11
295 TestNetworkPlugins/group/auto/KubeletFlags 0.21
296 TestNetworkPlugins/group/auto/NetCatPod 13.26
297 TestNetworkPlugins/group/auto/DNS 0.16
298 TestNetworkPlugins/group/auto/Localhost 0.12
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/custom-flannel/Start 69.51
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
304 TestNetworkPlugins/group/kindnet/DNS 0.17
305 TestNetworkPlugins/group/kindnet/Localhost 0.14
306 TestNetworkPlugins/group/kindnet/HairPin 0.14
307 TestNetworkPlugins/group/enable-default-cni/Start 89.23
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.22
310 TestNetworkPlugins/group/calico/NetCatPod 12.25
311 TestNetworkPlugins/group/calico/DNS 0.16
312 TestNetworkPlugins/group/calico/Localhost 0.14
313 TestNetworkPlugins/group/calico/HairPin 0.13
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.33
316 TestNetworkPlugins/group/flannel/Start 83.82
317 TestNetworkPlugins/group/custom-flannel/DNS 0.17
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
320 TestNetworkPlugins/group/bridge/Start 95.35
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
327 TestStartStop/group/embed-certs/serial/FirstStart 90.19
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
330 TestNetworkPlugins/group/flannel/NetCatPod 11.2
333 TestNetworkPlugins/group/flannel/DNS 0.18
334 TestNetworkPlugins/group/flannel/Localhost 0.14
335 TestNetworkPlugins/group/flannel/HairPin 0.14
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
337 TestNetworkPlugins/group/bridge/NetCatPod 11.24
339 TestStartStop/group/no-preload/serial/FirstStart 74.08
340 TestNetworkPlugins/group/bridge/DNS 0.22
341 TestNetworkPlugins/group/bridge/Localhost 0.16
342 TestNetworkPlugins/group/bridge/HairPin 0.16
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.56
345 TestStartStop/group/embed-certs/serial/DeployApp 9.63
346 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
347 TestStartStop/group/embed-certs/serial/Stop 91.03
348 TestStartStop/group/no-preload/serial/DeployApp 10.26
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
350 TestStartStop/group/no-preload/serial/Stop 91.08
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.27
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
355 TestStartStop/group/embed-certs/serial/SecondStart 325.5
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.54
357 TestStartStop/group/no-preload/serial/SecondStart 312.8
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 375.96
362 TestStartStop/group/old-k8s-version/serial/Stop 1.37
363 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
367 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
368 TestStartStop/group/embed-certs/serial/Pause 2.76
370 TestStartStop/group/newest-cni/serial/FirstStart 48.64
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
374 TestStartStop/group/no-preload/serial/Pause 2.65
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
377 TestStartStop/group/newest-cni/serial/Stop 10.53
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
379 TestStartStop/group/newest-cni/serial/SecondStart 38.01
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/newest-cni/serial/Pause 2.56
TestDownloadOnly/v1.20.0/json-events (24.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-719988 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-719988 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.92236298s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.92s)
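
The invocation exercised here only downloads and caches artifacts; as a point of reference, a sketch of replaying the same command outside the test harness, with the flags copied verbatim from the run recorded above:

	# Replay of the download-only start recorded above (profile name reused from the log)
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-719988 --force \
	  --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2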

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1025 21:35:34.842535  669177 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1025 21:35:34.842691  669177 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
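
This check passes as soon as the preload tarball cached by the previous test is present on disk. A rough equivalent of the check, with the path copied from the preload.go log line above (the MINIKUBE_HOME prefix is specific to this Jenkins agent):

	# Verify the cached v1.20.0 cri-o preload tarball exists locally
	ls -lh /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4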

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-719988
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-719988: exit status 85 (69.327383ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-719988 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |          |
	|         | -p download-only-719988        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 21:35:09
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:35:09.963379  669189 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:35:09.963511  669189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:09.963521  669189 out.go:358] Setting ErrFile to fd 2...
	I1025 21:35:09.963525  669189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:09.963735  669189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	W1025 21:35:09.963869  669189 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19758-661979/.minikube/config/config.json: open /home/jenkins/minikube-integration/19758-661979/.minikube/config/config.json: no such file or directory
	I1025 21:35:09.964473  669189 out.go:352] Setting JSON to true
	I1025 21:35:09.965464  669189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":15454,"bootTime":1729876656,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:35:09.965534  669189 start.go:139] virtualization: kvm guest
	I1025 21:35:09.968252  669189 out.go:97] [download-only-719988] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1025 21:35:09.968404  669189 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 21:35:09.968423  669189 notify.go:220] Checking for updates...
	I1025 21:35:09.969793  669189 out.go:169] MINIKUBE_LOCATION=19758
	I1025 21:35:09.971228  669189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:09.972781  669189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:35:09.974101  669189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:09.975427  669189 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:35:09.977942  669189 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:35:09.978166  669189 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:35:10.013364  669189 out.go:97] Using the kvm2 driver based on user configuration
	I1025 21:35:10.013401  669189 start.go:297] selected driver: kvm2
	I1025 21:35:10.013411  669189 start.go:901] validating driver "kvm2" against <nil>
	I1025 21:35:10.013768  669189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:10.013889  669189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:35:10.029435  669189 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 21:35:10.029479  669189 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 21:35:10.030015  669189 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1025 21:35:10.030165  669189 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:35:10.030205  669189 cni.go:84] Creating CNI manager for ""
	I1025 21:35:10.030264  669189 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:35:10.030273  669189 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:35:10.030336  669189 start.go:340] cluster config:
	{Name:download-only-719988 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-719988 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:35:10.030519  669189 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:10.032518  669189 out.go:97] Downloading VM boot image ...
	I1025 21:35:10.032569  669189 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19758-661979/.minikube/cache/iso/amd64/minikube-v1.34.0-1729002252-19806-amd64.iso
	I1025 21:35:19.934536  669189 out.go:97] Starting "download-only-719988" primary control-plane node in "download-only-719988" cluster
	I1025 21:35:19.934573  669189 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 21:35:20.035043  669189 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1025 21:35:20.035092  669189 cache.go:56] Caching tarball of preloaded images
	I1025 21:35:20.035293  669189 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1025 21:35:20.037434  669189 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1025 21:35:20.037457  669189 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:35:20.227700  669189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-719988 host does not exist
	  To start a cluster, run: "minikube start -p download-only-719988"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-719988
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
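
The two Delete tests above exercise the cleanup path; the commands they ran, as also recorded in the Audit table of the next log dump, amount to the following sketch:

	# Cleanup sequence from DeleteAll and DeleteAlwaysSucceeds (commands copied from the test output)
	out/minikube-linux-amd64 delete --all
	out/minikube-linux-amd64 delete -p download-only-719988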

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (14.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-941359 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-941359 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.711916487s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (14.71s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1025 21:35:49.899297  669177 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1025 21:35:49.899361  669177 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-941359
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-941359: exit status 85 (64.571188ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-719988 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | -p download-only-719988        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| delete  | -p download-only-719988        | download-only-719988 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC | 25 Oct 24 21:35 UTC |
	| start   | -o=json --download-only        | download-only-941359 | jenkins | v1.34.0 | 25 Oct 24 21:35 UTC |                     |
	|         | -p download-only-941359        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 21:35:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:35:35.230108  669443 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:35:35.230421  669443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:35.230433  669443 out.go:358] Setting ErrFile to fd 2...
	I1025 21:35:35.230437  669443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:35:35.230618  669443 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:35:35.231174  669443 out.go:352] Setting JSON to true
	I1025 21:35:35.232114  669443 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":15479,"bootTime":1729876656,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:35:35.232220  669443 start.go:139] virtualization: kvm guest
	I1025 21:35:35.234775  669443 out.go:97] [download-only-941359] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:35:35.234938  669443 notify.go:220] Checking for updates...
	I1025 21:35:35.236663  669443 out.go:169] MINIKUBE_LOCATION=19758
	I1025 21:35:35.238421  669443 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:35:35.240079  669443 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:35:35.241468  669443 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:35:35.242816  669443 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:35:35.245449  669443 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:35:35.245675  669443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:35:35.277960  669443 out.go:97] Using the kvm2 driver based on user configuration
	I1025 21:35:35.277992  669443 start.go:297] selected driver: kvm2
	I1025 21:35:35.277998  669443 start.go:901] validating driver "kvm2" against <nil>
	I1025 21:35:35.278337  669443 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:35.278434  669443 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19758-661979/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:35:35.293859  669443 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1025 21:35:35.293921  669443 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 21:35:35.294451  669443 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1025 21:35:35.294594  669443 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:35:35.294624  669443 cni.go:84] Creating CNI manager for ""
	I1025 21:35:35.294681  669443 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1025 21:35:35.294688  669443 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 21:35:35.294740  669443 start.go:340] cluster config:
	{Name:download-only-941359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-941359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:35:35.294838  669443 iso.go:125] acquiring lock: {Name:mk58f5ded1dd1a6cef12f07ae13108a7f83e0355 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:35:35.296714  669443 out.go:97] Starting "download-only-941359" primary control-plane node in "download-only-941359" cluster
	I1025 21:35:35.296728  669443 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:35:35.852944  669443 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1025 21:35:35.853006  669443 cache.go:56] Caching tarball of preloaded images
	I1025 21:35:35.853175  669443 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1025 21:35:35.855251  669443 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1025 21:35:35.855273  669443 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1025 21:35:35.964805  669443 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19758-661979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-941359 host does not exist
	  To start a cluster, run: "minikube start -p download-only-941359"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-941359
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1025 21:35:50.488869  669177 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-275962 --alsologtostderr --binary-mirror http://127.0.0.1:41967 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-275962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-275962
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (110.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-506410 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-506410 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m49.962869243s)
helpers_test.go:175: Cleaning up "offline-crio-506410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-506410
--- PASS: TestOffline (110.88s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-413632
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-413632: exit status 85 (55.461577ms)

                                                
                                                
-- stdout --
	* Profile "addons-413632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-413632"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-413632
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-413632: exit status 85 (54.140963ms)

                                                
                                                
-- stdout --
	* Profile "addons-413632" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-413632"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (133.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-413632 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-413632 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m13.381890211s)
--- PASS: TestAddons/Setup (133.38s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (1.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-413632 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-413632 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-413632 get secret gcp-auth -n new-namespace: exit status 1 (91.283423ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-413632 logs -l app=gcp-auth -n gcp-auth
I1025 21:38:05.068262  669177 retry.go:31] will retry after 1.624938186s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/25 21:38:03 GCP Auth Webhook started!
	2024/10/25 21:38:04 Ready to marshal response ...
	2024/10/25 21:38:04 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-413632 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.91s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-413632 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-413632 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a5d80ed-a009-46e7-b426-d6655a8413e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a5d80ed-a009-46e7-b426-d6655a8413e2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004636256s
addons_test.go:633: (dbg) Run:  kubectl --context addons-413632 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-413632 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-413632 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                    
TestAddons/parallel/Registry (17.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.92314ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xj8xz" [e20b3155-ea05-4981-a773-3c2c98521771] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00390379s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kpm4c" [211d5f74-7b9d-4d8c-bcdb-bce343e97d06] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003708649s
addons_test.go:331: (dbg) Run:  kubectl --context addons-413632 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-413632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-413632 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.289613746s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 ip
2024/10/25 21:38:46 [DEBUG] GET http://192.168.39.223:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.11s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2hssp" [50e9880a-e25c-426e-b796-e13d8c37ace9] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004760871s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable inspektor-gadget --alsologtostderr -v=1: (5.704082485s)
--- PASS: TestAddons/parallel/InspektorGadget (10.71s)

                                                
                                    
TestAddons/parallel/CSI (63.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1025 21:39:13.706497  669177 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 21:39:13.711838  669177 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 21:39:13.711861  669177 kapi.go:107] duration metric: took 5.368799ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.376459ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fd0f7401-37aa-466a-a55e-88afc93437f7] Pending
helpers_test.go:344: "task-pv-pod" [fd0f7401-37aa-466a-a55e-88afc93437f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fd0f7401-37aa-466a-a55e-88afc93437f7] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004870544s
addons_test.go:511: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-413632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-413632 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-413632 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-413632 delete pod task-pv-pod: (1.065427684s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-413632 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-413632 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4c846fcd-cd87-4881-9639-e90cd9c3c640] Pending
helpers_test.go:344: "task-pv-pod-restore" [4c846fcd-cd87-4881-9639-e90cd9c3c640] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4c846fcd-cd87-4881-9639-e90cd9c3c640] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003726916s
addons_test.go:553: (dbg) Run:  kubectl --context addons-413632 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-413632 delete pod task-pv-pod-restore: (1.676028891s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-413632 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-413632 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770107691s)
--- PASS: TestAddons/parallel/CSI (63.71s)

                                                
                                    
TestAddons/parallel/Headlamp (20.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-413632 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-2sfh2" [da287fd6-a73c-40e5-bf14-d636f653f07d] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-2sfh2" [da287fd6-a73c-40e5-bf14-d636f653f07d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-2sfh2" [da287fd6-a73c-40e5-bf14-d636f653f07d] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004492687s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable headlamp --alsologtostderr -v=1: (6.000294997s)
--- PASS: TestAddons/parallel/Headlamp (20.91s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-nd7tx" [a08730ce-5f50-49ee-acc3-7b767ff0f6d2] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003239685s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (56.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-413632 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8a5a24ce-9b2d-4dba-84be-1e5fb89dc8e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8a5a24ce-9b2d-4dba-84be-1e5fb89dc8e9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8a5a24ce-9b2d-4dba-84be-1e5fb89dc8e9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003873701s
addons_test.go:906: (dbg) Run:  kubectl --context addons-413632 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 ssh "cat /opt/local-path-provisioner/pvc-635e1fba-296d-4aed-ae47-8b59b1722843_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-413632 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-413632 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.329094518s)
--- PASS: TestAddons/parallel/LocalPath (56.17s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.07s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k298m" [b318342e-76c3-477e-8d99-38359ebef6bf] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004042966s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.062619698s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.07s)

                                                
                                    
TestAddons/parallel/Yakd (11.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5n2hp" [4477ceb6-4e34-4256-8c34-0a3fc7688629] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004730911s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-413632 addons disable yakd --alsologtostderr -v=1: (5.720124054s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-413632
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-413632: (1m30.974469902s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-413632
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-413632
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-413632
--- PASS: TestAddons/StoppedEnableDisable (91.27s)

                                                
                                    
TestCertOptions (80.81s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-734897 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1025 22:39:40.903105  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-734897 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.25020018s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-734897 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-734897 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-734897 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-734897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-734897
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-734897: (1.037121272s)
--- PASS: TestCertOptions (80.81s)

                                                
                                    
TestCertExpiration (284.61s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-928371 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-928371 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.817747703s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-928371 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-928371 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.654838717s)
helpers_test.go:175: Cleaning up "cert-expiration-928371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-928371
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-928371: (1.133408514s)
--- PASS: TestCertExpiration (284.61s)

                                                
                                    
TestForceSystemdFlag (101.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-969667 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1025 22:37:50.020579  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:38:06.951289  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-969667 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.065272606s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-969667 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-969667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-969667
--- PASS: TestForceSystemdFlag (101.14s)

                                                
                                    
TestForceSystemdEnv (44.68s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-542171 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-542171 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.877591847s)
helpers_test.go:175: Cleaning up "force-systemd-env-542171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-542171
--- PASS: TestForceSystemdEnv (44.68s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.49s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1025 22:40:59.118459  669177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 22:40:59.118641  669177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1025 22:40:59.153526  669177 install.go:62] docker-machine-driver-kvm2: exit status 1
W1025 22:40:59.154018  669177 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1025 22:40:59.154099  669177 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate803617376/001/docker-machine-driver-kvm2
I1025 22:40:59.402830  669177 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate803617376/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc0004b4dd0 gz:0xc0004b4dd8 tar:0xc0004b4d80 tar.bz2:0xc0004b4d90 tar.gz:0xc0004b4da0 tar.xz:0xc0004b4db0 tar.zst:0xc0004b4dc0 tbz2:0xc0004b4d90 tgz:0xc0004b4da0 txz:0xc0004b4db0 tzst:0xc0004b4dc0 xz:0xc0004b4de0 zip:0xc0004b4df0 zst:0xc0004b4de8] Getters:map[file:0xc00198f8d0 http:0xc00077f220 https:0xc00077f270] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 22:40:59.402906  669177 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate803617376/001/docker-machine-driver-kvm2
I1025 22:41:01.715253  669177 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 22:41:01.715342  669177 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1025 22:41:01.748779  669177 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1025 22:41:01.748814  669177 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1025 22:41:01.748881  669177 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1025 22:41:01.748919  669177 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate803617376/002/docker-machine-driver-kvm2
I1025 22:41:01.807566  669177 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate803617376/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020 0x5308020] Decompressors:map[bz2:0xc0004b4dd0 gz:0xc0004b4dd8 tar:0xc0004b4d80 tar.bz2:0xc0004b4d90 tar.gz:0xc0004b4da0 tar.xz:0xc0004b4db0 tar.zst:0xc0004b4dc0 tbz2:0xc0004b4d90 tgz:0xc0004b4da0 txz:0xc0004b4db0 tzst:0xc0004b4dc0 xz:0xc0004b4de0 zip:0xc0004b4df0 zst:0xc0004b4de8] Getters:map[file:0xc001e93e90 http:0xc0007f7cc0 https:0xc0007f7d10] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 22:41:01.807628  669177 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate803617376/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.49s)

                                                
                                    
TestErrorSpam/setup (43.93s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-238139 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-238139 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-238139 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-238139 --driver=kvm2  --container-runtime=crio: (43.933443469s)
--- PASS: TestErrorSpam/setup (43.93s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
TestErrorSpam/unpause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

                                                
                                    
TestErrorSpam/stop (5.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop: (2.326544706s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop: (1.662588148s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-238139 --log_dir /tmp/nospam-238139 stop: (1.326982933s)
--- PASS: TestErrorSpam/stop (5.32s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19758-661979/.minikube/files/etc/test/nested/copy/669177/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.32s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1025 21:48:06.949887  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:06.956272  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:06.967623  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:06.989055  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:07.030510  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:07.112060  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:07.273605  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:07.595325  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:08.237364  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:09.518970  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:12.082097  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:17.204209  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-889777 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m18.315484967s)
--- PASS: TestFunctional/serial/StartWithProxy (78.32s)
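
The start invocation above carries the full flag set this functional suite uses. A sketch of an equivalent manual start, assuming minikube is on PATH and the kvm2 driver plugin is installed (the profile name is reused from this run):

	minikube start -p functional-889777 \
	  --memory=4000 \
	  --apiserver-port=8441 \
	  --wait=all \
	  --driver=kvm2 \
	  --container-runtime=crio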

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (33.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1025 21:48:18.168232  669177 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --alsologtostderr -v=8
E1025 21:48:27.446325  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:48:47.928318  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-889777 --alsologtostderr -v=8: (33.194484066s)
functional_test.go:663: soft start took 33.195139406s for "functional-889777" cluster.
I1025 21:48:51.363047  669177 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (33.20s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-889777 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:3.1: (1.123037034s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:3.3: (1.181248435s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 cache add registry.k8s.io/pause:latest: (1.100252526s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.40s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-889777 /tmp/TestFunctionalserialCacheCmdcacheadd_local939250591/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache add minikube-local-cache-test:functional-889777
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 cache add minikube-local-cache-test:functional-889777: (1.921610223s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache delete minikube-local-cache-test:functional-889777
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-889777
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.828824ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 cache reload: (1.021660247s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
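
The cache_reload subtest removes a cached image inside the node, confirms it is gone, and restores it with cache reload. A sketch of the same sequence, with minikube standing in for out/minikube-linux-amd64:

	minikube -p functional-889777 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# inspecti exits non-zero while the image is absent, as in the output above
	minikube -p functional-889777 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent as expected"
	minikube -p functional-889777 cache reload
	minikube -p functional-889777 ssh sudo crictl inspecti registry.k8s.io/pause:latest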

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 kubectl -- --context functional-889777 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-889777 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.29s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 21:49:28.890251  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-889777 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.287940438s)
functional_test.go:761: restart took 34.288115486s for "functional-889777" cluster.
I1025 21:49:33.790869  669177 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.29s)
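
ExtraConfig restarts the same profile with an extra apiserver admission-plugin setting and waits for all components to come back. The equivalent manual restart, under the same assumptions as the sketches above:

	minikube start -p functional-889777 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all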

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-889777 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 logs: (1.449634399s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 logs --file /tmp/TestFunctionalserialLogsFileCmd1193201709/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 logs --file /tmp/TestFunctionalserialLogsFileCmd1193201709/001/logs.txt: (1.457650924s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-889777 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-889777
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-889777: exit status 115 (278.520907ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.12:30520 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-889777 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)
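
InvalidService applies a Service that selects no running pods and expects minikube service to refuse it with SVC_UNREACHABLE (exit status 115 above). A sketch of the same round trip; the manifest path refers to the test's own testdata, so locally any Service without backing pods would do:

	kubectl --context functional-889777 apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p functional-889777 || echo "rejected with exit status $?"
	kubectl --context functional-889777 delete -f testdata/invalidsvc.yaml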

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 config get cpus: exit status 14 (61.823199ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 config get cpus: exit status 14 (48.866236ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
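
ConfigCmd exercises the per-profile config store; config get exits with status 14 whenever the key is unset, which is what both non-zero exits above verify. A sketch:

	minikube -p functional-889777 config unset cpus
	minikube -p functional-889777 config get cpus || echo "key unset (exit $?)"
	minikube -p functional-889777 config set cpus 2
	minikube -p functional-889777 config get cpus    # now prints 2
	minikube -p functional-889777 config unset cpus
	minikube -p functional-889777 config get cpus || echo "key unset again (exit $?)"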

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (29.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889777 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-889777 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 678248: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.96s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889777 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (153.848028ms)

                                                
                                                
-- stdout --
	* [functional-889777] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:50:05.987318  678762 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:50:05.987451  678762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:05.987461  678762 out.go:358] Setting ErrFile to fd 2...
	I1025 21:50:05.987467  678762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:50:05.987733  678762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:50:05.988423  678762 out.go:352] Setting JSON to false
	I1025 21:50:05.989884  678762 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16350,"bootTime":1729876656,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:50:05.990043  678762 start.go:139] virtualization: kvm guest
	I1025 21:50:05.992356  678762 out.go:177] * [functional-889777] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:50:05.993657  678762 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 21:50:05.993664  678762 notify.go:220] Checking for updates...
	I1025 21:50:05.996073  678762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:50:05.997304  678762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:50:05.998645  678762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:50:06.000084  678762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:50:06.001554  678762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:50:06.003597  678762 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:50:06.004034  678762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:50:06.004117  678762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:50:06.020381  678762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I1025 21:50:06.020918  678762 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:50:06.021579  678762 main.go:141] libmachine: Using API Version  1
	I1025 21:50:06.021601  678762 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:50:06.021969  678762 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:50:06.022167  678762 main.go:141] libmachine: (functional-889777) Calling .DriverName
	I1025 21:50:06.022438  678762 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:50:06.022793  678762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:50:06.022842  678762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:50:06.039282  678762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I1025 21:50:06.039828  678762 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:50:06.040452  678762 main.go:141] libmachine: Using API Version  1
	I1025 21:50:06.040485  678762 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:50:06.040834  678762 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:50:06.041048  678762 main.go:141] libmachine: (functional-889777) Calling .DriverName
	I1025 21:50:06.074805  678762 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 21:50:06.076154  678762 start.go:297] selected driver: kvm2
	I1025 21:50:06.076167  678762 start.go:901] validating driver "kvm2" against &{Name:functional-889777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-889777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:50:06.076324  678762 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:50:06.078343  678762 out.go:201] 
	W1025 21:50:06.079491  678762 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 21:50:06.080585  678762 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
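
DryRun validates flags without changing the cluster: the 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it falls below the 1800MB usable minimum reported above, while the second dry run with no memory override succeeds. A sketch of both calls, with minikube standing in for out/minikube-linux-amd64:

	# expected to fail: requested memory is below the usable minimum
	minikube start -p functional-889777 --dry-run --memory 250MB --alsologtostderr \
	  --driver=kvm2 --container-runtime=crio || echo "rejected with exit status $?"
	# expected to succeed: same dry run without the memory override
	minikube start -p functional-889777 --dry-run --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio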

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-889777 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-889777 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.76438ms)

                                                
                                                
-- stdout --
	* [functional-889777] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:49:52.413120  677255 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:49:52.413236  677255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:49:52.413245  677255 out.go:358] Setting ErrFile to fd 2...
	I1025 21:49:52.413249  677255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:49:52.413520  677255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:49:52.414054  677255 out.go:352] Setting JSON to false
	I1025 21:49:52.415052  677255 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":16336,"bootTime":1729876656,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:49:52.415165  677255 start.go:139] virtualization: kvm guest
	I1025 21:49:52.417560  677255 out.go:177] * [functional-889777] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1025 21:49:52.419286  677255 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 21:49:52.419294  677255 notify.go:220] Checking for updates...
	I1025 21:49:52.421124  677255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:49:52.422663  677255 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 21:49:52.424067  677255 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 21:49:52.425398  677255 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:49:52.426801  677255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:49:52.428468  677255 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:49:52.428872  677255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:49:52.428945  677255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:49:52.444272  677255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I1025 21:49:52.444796  677255 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:49:52.445377  677255 main.go:141] libmachine: Using API Version  1
	I1025 21:49:52.445399  677255 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:49:52.445784  677255 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:49:52.445965  677255 main.go:141] libmachine: (functional-889777) Calling .DriverName
	I1025 21:49:52.446287  677255 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 21:49:52.446722  677255 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:49:52.446773  677255 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:49:52.461813  677255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38039
	I1025 21:49:52.462296  677255 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:49:52.462846  677255 main.go:141] libmachine: Using API Version  1
	I1025 21:49:52.462872  677255 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:49:52.463230  677255 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:49:52.463413  677255 main.go:141] libmachine: (functional-889777) Calling .DriverName
	I1025 21:49:52.495318  677255 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1025 21:49:52.496785  677255 start.go:297] selected driver: kvm2
	I1025 21:49:52.496797  677255 start.go:901] validating driver "kvm2" against &{Name:functional-889777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-889777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 21:49:52.496975  677255 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:49:52.499277  677255 out.go:201] 
	W1025 21:49:52.500747  677255 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 21:49:52.502251  677255 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-889777 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-889777 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vkwgt" [0967e9f2-fe3f-49d5-82a1-62f8da7b0768] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-vkwgt" [0967e9f2-fe3f-49d5-82a1-62f8da7b0768] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.408617136s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.12:32416
functional_test.go:1675: http://192.168.39.12:32416: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-vkwgt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.12:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.12:32416
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.92s)
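
ServiceCmdConnect deploys an echoserver, exposes it as a NodePort Service, resolves its URL with minikube service --url, and fetches it; the response body above comes from that request. A sketch of the same flow; the kubectl wait and curl steps are stand-ins for the polling and HTTP request the test performs from Go:

	kubectl --context functional-889777 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-889777 expose deployment hello-node-connect --type=NodePort --port=8080
	kubectl --context functional-889777 wait --for=condition=available deployment/hello-node-connect --timeout=120s
	URL=$(minikube -p functional-889777 service hello-node-connect --url)
	curl -s "$URL"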

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (49.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [751ef9a6-0ec4-49bf-b382-b868f70ad733] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.021535971s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-889777 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-889777 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-889777 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-889777 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d1be7933-d089-4ec8-a8a0-b06d453d4594] Pending
helpers_test.go:344: "sp-pod" [d1be7933-d089-4ec8-a8a0-b06d453d4594] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d1be7933-d089-4ec8-a8a0-b06d453d4594] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004812278s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-889777 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-889777 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-889777 delete -f testdata/storage-provisioner/pod.yaml: (2.438108573s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-889777 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c96c6e16-eba5-4255-8085-7560b7443f9d] Pending
helpers_test.go:344: "sp-pod" [c96c6e16-eba5-4255-8085-7560b7443f9d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c96c6e16-eba5-4255-8085-7560b7443f9d] Running
2024/10/25 21:50:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004006398s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-889777 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.84s)
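
The PersistentVolumeClaim test checks that data written through the claim survives pod deletion: it writes a file from the first sp-pod, deletes that pod, recreates it against the same claim, and lists the file again. A sketch using the same testdata manifests; each exec assumes the pod has reached Running, which the test waits for explicitly:

	kubectl --context functional-889777 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-889777 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-889777 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-889777 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-889777 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-889777 exec sp-pod -- ls /tmp/mount    # foo should still be listed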

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh -n functional-889777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cp functional-889777:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4126657117/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh -n functional-889777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh -n functional-889777 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)
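
CpCmd copies a file into the node, back out to the host, and into a node directory that does not yet exist, verifying each copy over SSH. A sketch; the host-side destination path is illustrative:

	minikube -p functional-889777 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-889777 ssh -n functional-889777 "sudo cat /home/docker/cp-test.txt"
	minikube -p functional-889777 cp functional-889777:/home/docker/cp-test.txt /tmp/cp-test.txt
	minikube -p functional-889777 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	minikube -p functional-889777 ssh -n functional-889777 "sudo cat /tmp/does/not/exist/cp-test.txt"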

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-889777 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-5kr84" [c5635acf-82f2-40f1-9de2-5921e39fd7c7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-5kr84" [c5635acf-82f2-40f1-9de2-5921e39fd7c7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003617758s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;": exit status 1 (177.834058ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1025 21:50:20.049006  669177 retry.go:31] will retry after 752.346567ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;": exit status 1 (145.413171ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1025 21:50:20.947202  669177 retry.go:31] will retry after 2.024079027s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.58s)
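
MySQL shows why the harness retries: the pod reports Running before mysqld is listening on its socket, so the first two "show databases" attempts fail with ERROR 2002 and are retried with backoff until the third succeeds. A sketch of the same probe; the shell loop is only an illustration (the test retries from Go) and the pod name is specific to this run:

	until kubectl --context functional-889777 exec mysql-6cdb49bbb-5kr84 -- mysql -ppassword -e "show databases;"; do
	  sleep 2    # mysqld may still be initializing even though the pod is Running
	done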

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/669177/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /etc/test/nested/copy/669177/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/669177.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /etc/ssl/certs/669177.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/669177.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /usr/share/ca-certificates/669177.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/6691772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /etc/ssl/certs/6691772.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/6691772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /usr/share/ca-certificates/6691772.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
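
CertSync verifies that the expected certificate files are present inside the VM, both under their .pem names and under the hashed .0 names used for OpenSSL-style lookups. A sketch of the same checks; the file names are specific to this run and will differ locally:

	minikube -p functional-889777 ssh "sudo cat /etc/ssl/certs/669177.pem"
	minikube -p functional-889777 ssh "sudo cat /usr/share/ca-certificates/669177.pem"
	minikube -p functional-889777 ssh "sudo cat /etc/ssl/certs/51391683.0"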

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-889777 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active docker": exit status 1 (273.272453ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active containerd": exit status 1 (239.806961ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
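The two non-zero exits above are the expected, passing outcome: this profile runs crio, so docker and containerd are stopped, and systemctl is-active prints "inactive" and exits non-zero for a unit that is not running (the remote status is reported as 3, while the minikube command itself exits 1). A minimal re-check by hand, assuming the profile is still up:

  out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active crio"    # expected: active, exit 0
  out/minikube-linux-amd64 -p functional-889777 ssh "sudo systemctl is-active docker"  # expected: inactive, non-zero exit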

                                                
                                    
TestFunctional/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-889777 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-889777 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nf9jw" [3524e4b4-2cc3-4c24-9764-4c4eb3e55faf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nf9jw" [3524e4b4-2cc3-4c24-9764-4c4eb3e55faf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004107705s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.19s)
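The deployment above is plain kubectl against the functional-889777 context; the harness then polls for a Running pod carrying the app=hello-node label. Outside the test, roughly the same sequence (substituting kubectl wait for the harness's pod polling) would be:

  kubectl --context functional-889777 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-889777 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-889777 wait --for=condition=available deployment/hello-node --timeout=10m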

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 image ls --format short --alsologtostderr: (1.242269064s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889777 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-889777
localhost/kicbase/echo-server:functional-889777
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889777 image ls --format short --alsologtostderr:
I1025 21:50:07.353032  678859 out.go:345] Setting OutFile to fd 1 ...
I1025 21:50:07.353322  678859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:07.353334  678859 out.go:358] Setting ErrFile to fd 2...
I1025 21:50:07.353341  678859 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:07.353644  678859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
I1025 21:50:07.354474  678859 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:07.354651  678859 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:07.355205  678859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:07.355292  678859 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:07.371306  678859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
I1025 21:50:07.371877  678859 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:07.372586  678859 main.go:141] libmachine: Using API Version  1
I1025 21:50:07.372630  678859 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:07.373044  678859 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:07.373264  678859 main.go:141] libmachine: (functional-889777) Calling .GetState
I1025 21:50:07.375587  678859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:07.375654  678859 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:07.391428  678859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
I1025 21:50:07.391897  678859 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:07.392472  678859 main.go:141] libmachine: Using API Version  1
I1025 21:50:07.392499  678859 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:07.392880  678859 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:07.393169  678859 main.go:141] libmachine: (functional-889777) Calling .DriverName
I1025 21:50:07.393492  678859 ssh_runner.go:195] Run: systemctl --version
I1025 21:50:07.393542  678859 main.go:141] libmachine: (functional-889777) Calling .GetSSHHostname
I1025 21:50:07.397149  678859 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:07.397599  678859 main.go:141] libmachine: (functional-889777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:cf:9b", ip: ""} in network mk-functional-889777: {Iface:virbr1 ExpiryTime:2024-10-25 22:47:14 +0000 UTC Type:0 Mac:52:54:00:74:cf:9b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-889777 Clientid:01:52:54:00:74:cf:9b}
I1025 21:50:07.397632  678859 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:07.397790  678859 main.go:141] libmachine: (functional-889777) Calling .GetSSHPort
I1025 21:50:07.397993  678859 main.go:141] libmachine: (functional-889777) Calling .GetSSHKeyPath
I1025 21:50:07.398150  678859 main.go:141] libmachine: (functional-889777) Calling .GetSSHUsername
I1025 21:50:07.398309  678859 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/functional-889777/id_rsa Username:docker}
I1025 21:50:07.518190  678859 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 21:50:08.537060  678859 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.018827423s)
I1025 21:50:08.537469  678859 main.go:141] libmachine: Making call to close driver server
I1025 21:50:08.537485  678859 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:08.537788  678859 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:08.537807  678859 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:08.537824  678859 main.go:141] libmachine: Making call to close driver server
I1025 21:50:08.537842  678859 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:08.538090  678859 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:08.538127  678859 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:08.538134  678859 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889777 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| localhost/kicbase/echo-server           | functional-889777  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 3b25b682ea82b | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| localhost/minikube-local-cache-test     | functional-889777  | 3746527ba46e7 | 3.33kB |
| localhost/my-image                      | functional-889777  | 49b68331dccfc | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889777 image ls --format table --alsologtostderr:
I1025 21:50:16.730586  679040 out.go:345] Setting OutFile to fd 1 ...
I1025 21:50:16.730777  679040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:16.730791  679040 out.go:358] Setting ErrFile to fd 2...
I1025 21:50:16.730798  679040 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:16.731066  679040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
I1025 21:50:16.731919  679040 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:16.732075  679040 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:16.732481  679040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:16.732545  679040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:16.747935  679040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
I1025 21:50:16.748413  679040 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:16.749010  679040 main.go:141] libmachine: Using API Version  1
I1025 21:50:16.749043  679040 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:16.749438  679040 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:16.749635  679040 main.go:141] libmachine: (functional-889777) Calling .GetState
I1025 21:50:16.751567  679040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:16.751612  679040 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:16.766690  679040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
I1025 21:50:16.767089  679040 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:16.767548  679040 main.go:141] libmachine: Using API Version  1
I1025 21:50:16.767573  679040 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:16.767906  679040 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:16.768102  679040 main.go:141] libmachine: (functional-889777) Calling .DriverName
I1025 21:50:16.768326  679040 ssh_runner.go:195] Run: systemctl --version
I1025 21:50:16.768348  679040 main.go:141] libmachine: (functional-889777) Calling .GetSSHHostname
I1025 21:50:16.770941  679040 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:16.771417  679040 main.go:141] libmachine: (functional-889777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:cf:9b", ip: ""} in network mk-functional-889777: {Iface:virbr1 ExpiryTime:2024-10-25 22:47:14 +0000 UTC Type:0 Mac:52:54:00:74:cf:9b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-889777 Clientid:01:52:54:00:74:cf:9b}
I1025 21:50:16.771446  679040 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:16.771683  679040 main.go:141] libmachine: (functional-889777) Calling .GetSSHPort
I1025 21:50:16.771925  679040 main.go:141] libmachine: (functional-889777) Calling .GetSSHKeyPath
I1025 21:50:16.772087  679040 main.go:141] libmachine: (functional-889777) Calling .GetSSHUsername
I1025 21:50:16.772234  679040 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/functional-889777/id_rsa Username:docker}
I1025 21:50:16.855794  679040 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 21:50:16.902253  679040 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.902274  679040 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.902664  679040 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.902685  679040 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:16.902688  679040 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:16.902708  679040 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.902716  679040 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.902948  679040 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.902969  679040 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:16.902974  679040 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889777 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b9
7a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.
io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-control
ler-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"7645140245807445d005dab80655742498235717daa6a867ddd9ec6e2d0a6be5","repoDigests":["docker.io/library/9fcfe977d5115fbb26489f1adfbde44ab4654af02fa5bddc97ff385cd864c1d8-tmp@sha256:fb1b134b9de6fead598bb496c918cfae6a9bd99ab244ebf6a38609ab875f0c1d"],"repoTags":[],"size":"1466018"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhos
t/kicbase/echo-server:functional-889777"],"size":"4943877"},{"id":"3746527ba46e7ea09a157c1881c021c0e7c1f7e58f01260099ec387e447f579d","repoDigests":["localhost/minikube-local-cache-test@sha256:c59ae7893ca07653792cc9809b28fbbc00b27c97c410b9ffb8046331a0169ba3"],"repoTags":["localhost/minikube-local-cache-test:functional-889777"],"size":"3328"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests"
:["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76
049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df","repoDigests":["docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb","docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818008"},{"id":"49b68331dccfc4426afc0f0c80141dbde7320a0d1d60e2d5ac2fb751af74297d","repoDigests":["localhost/my-image@sha256:eaf10da78ddccec6fab2b2b45f6fdfd7772db6018ee469af8169acbdb2dfe415"],"repoTags":["localhost/my-image:functional-889777"],"size":"1468599"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"]
,"size":"149009664"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889777 image ls --format json --alsologtostderr:
I1025 21:50:16.493297  679016 out.go:345] Setting OutFile to fd 1 ...
I1025 21:50:16.493492  679016 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:16.493507  679016 out.go:358] Setting ErrFile to fd 2...
I1025 21:50:16.493514  679016 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:16.493805  679016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
I1025 21:50:16.494767  679016 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:16.494943  679016 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:16.495564  679016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:16.495666  679016 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:16.510981  679016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39073
I1025 21:50:16.511498  679016 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:16.512072  679016 main.go:141] libmachine: Using API Version  1
I1025 21:50:16.512098  679016 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:16.512517  679016 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:16.512747  679016 main.go:141] libmachine: (functional-889777) Calling .GetState
I1025 21:50:16.514731  679016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:16.514787  679016 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:16.530394  679016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
I1025 21:50:16.530964  679016 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:16.531551  679016 main.go:141] libmachine: Using API Version  1
I1025 21:50:16.531595  679016 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:16.531996  679016 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:16.532227  679016 main.go:141] libmachine: (functional-889777) Calling .DriverName
I1025 21:50:16.532555  679016 ssh_runner.go:195] Run: systemctl --version
I1025 21:50:16.532596  679016 main.go:141] libmachine: (functional-889777) Calling .GetSSHHostname
I1025 21:50:16.535558  679016 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:16.535927  679016 main.go:141] libmachine: (functional-889777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:cf:9b", ip: ""} in network mk-functional-889777: {Iface:virbr1 ExpiryTime:2024-10-25 22:47:14 +0000 UTC Type:0 Mac:52:54:00:74:cf:9b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-889777 Clientid:01:52:54:00:74:cf:9b}
I1025 21:50:16.535973  679016 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:16.536103  679016 main.go:141] libmachine: (functional-889777) Calling .GetSSHPort
I1025 21:50:16.536313  679016 main.go:141] libmachine: (functional-889777) Calling .GetSSHKeyPath
I1025 21:50:16.536458  679016 main.go:141] libmachine: (functional-889777) Calling .GetSSHUsername
I1025 21:50:16.536604  679016 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/functional-889777/id_rsa Username:docker}
I1025 21:50:16.632663  679016 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 21:50:16.676357  679016 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.676380  679016 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.676654  679016 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.676674  679016 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:16.676679  679016 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:16.676692  679016 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.676703  679016 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.676987  679016 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.677004  679016 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889777 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 3b25b682ea82b2db3cc4fd48db818be788ee3f902ac7378090cf2624ec2442df
repoDigests:
- docker.io/library/nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
- docker.io/library/nginx@sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26
repoTags:
- docker.io/library/nginx:latest
size: "195818008"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-889777
size: "4943877"
- id: 3746527ba46e7ea09a157c1881c021c0e7c1f7e58f01260099ec387e447f579d
repoDigests:
- localhost/minikube-local-cache-test@sha256:c59ae7893ca07653792cc9809b28fbbc00b27c97c410b9ffb8046331a0169ba3
repoTags:
- localhost/minikube-local-cache-test:functional-889777
size: "3328"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889777 image ls --format yaml --alsologtostderr:
I1025 21:50:08.605014  678883 out.go:345] Setting OutFile to fd 1 ...
I1025 21:50:08.605121  678883 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:08.605130  678883 out.go:358] Setting ErrFile to fd 2...
I1025 21:50:08.605135  678883 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:08.605327  678883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
I1025 21:50:08.605928  678883 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:08.606032  678883 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:08.606407  678883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:08.606456  678883 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:08.622328  678883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
I1025 21:50:08.623006  678883 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:08.623685  678883 main.go:141] libmachine: Using API Version  1
I1025 21:50:08.623707  678883 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:08.624111  678883 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:08.624325  678883 main.go:141] libmachine: (functional-889777) Calling .GetState
I1025 21:50:08.626635  678883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:08.626695  678883 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:08.642508  678883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
I1025 21:50:08.642998  678883 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:08.643559  678883 main.go:141] libmachine: Using API Version  1
I1025 21:50:08.643591  678883 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:08.643941  678883 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:08.644159  678883 main.go:141] libmachine: (functional-889777) Calling .DriverName
I1025 21:50:08.644366  678883 ssh_runner.go:195] Run: systemctl --version
I1025 21:50:08.644402  678883 main.go:141] libmachine: (functional-889777) Calling .GetSSHHostname
I1025 21:50:08.647272  678883 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:08.647702  678883 main.go:141] libmachine: (functional-889777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:cf:9b", ip: ""} in network mk-functional-889777: {Iface:virbr1 ExpiryTime:2024-10-25 22:47:14 +0000 UTC Type:0 Mac:52:54:00:74:cf:9b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-889777 Clientid:01:52:54:00:74:cf:9b}
I1025 21:50:08.647734  678883 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:08.647862  678883 main.go:141] libmachine: (functional-889777) Calling .GetSSHPort
I1025 21:50:08.648048  678883 main.go:141] libmachine: (functional-889777) Calling .GetSSHKeyPath
I1025 21:50:08.648201  678883 main.go:141] libmachine: (functional-889777) Calling .GetSSHUsername
I1025 21:50:08.648370  678883 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/functional-889777/id_rsa Username:docker}
I1025 21:50:08.758615  678883 ssh_runner.go:195] Run: sudo crictl images --output json
I1025 21:50:08.821995  678883 main.go:141] libmachine: Making call to close driver server
I1025 21:50:08.822015  678883 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:08.822337  678883 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:08.822361  678883 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:08.822377  678883 main.go:141] libmachine: Making call to close driver server
I1025 21:50:08.822386  678883 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:08.822381  678883 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:08.822627  678883 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:08.822658  678883 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:08.822670  678883 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh pgrep buildkitd: exit status 1 (276.16694ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image build -t localhost/my-image:functional-889777 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 image build -t localhost/my-image:functional-889777 testdata/build --alsologtostderr: (7.096042119s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-889777 image build -t localhost/my-image:functional-889777 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 76451402458
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-889777
--> 49b68331dcc
Successfully tagged localhost/my-image:functional-889777
49b68331dccfc4426afc0f0c80141dbde7320a0d1d60e2d5ac2fb751af74297d
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-889777 image build -t localhost/my-image:functional-889777 testdata/build --alsologtostderr:
I1025 21:50:09.153934  678937 out.go:345] Setting OutFile to fd 1 ...
I1025 21:50:09.154082  678937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:09.154096  678937 out.go:358] Setting ErrFile to fd 2...
I1025 21:50:09.154102  678937 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 21:50:09.154303  678937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
I1025 21:50:09.154871  678937 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:09.155641  678937 config.go:182] Loaded profile config "functional-889777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1025 21:50:09.156212  678937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:09.156278  678937 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:09.171675  678937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
I1025 21:50:09.172272  678937 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:09.172915  678937 main.go:141] libmachine: Using API Version  1
I1025 21:50:09.172965  678937 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:09.173367  678937 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:09.173572  678937 main.go:141] libmachine: (functional-889777) Calling .GetState
I1025 21:50:09.175473  678937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1025 21:50:09.175514  678937 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:50:09.190555  678937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38565
I1025 21:50:09.191062  678937 main.go:141] libmachine: () Calling .GetVersion
I1025 21:50:09.191582  678937 main.go:141] libmachine: Using API Version  1
I1025 21:50:09.191619  678937 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:50:09.191952  678937 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:50:09.192138  678937 main.go:141] libmachine: (functional-889777) Calling .DriverName
I1025 21:50:09.192320  678937 ssh_runner.go:195] Run: systemctl --version
I1025 21:50:09.192347  678937 main.go:141] libmachine: (functional-889777) Calling .GetSSHHostname
I1025 21:50:09.195118  678937 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:09.195572  678937 main.go:141] libmachine: (functional-889777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:cf:9b", ip: ""} in network mk-functional-889777: {Iface:virbr1 ExpiryTime:2024-10-25 22:47:14 +0000 UTC Type:0 Mac:52:54:00:74:cf:9b Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-889777 Clientid:01:52:54:00:74:cf:9b}
I1025 21:50:09.195603  678937 main.go:141] libmachine: (functional-889777) DBG | domain functional-889777 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:cf:9b in network mk-functional-889777
I1025 21:50:09.195832  678937 main.go:141] libmachine: (functional-889777) Calling .GetSSHPort
I1025 21:50:09.196017  678937 main.go:141] libmachine: (functional-889777) Calling .GetSSHKeyPath
I1025 21:50:09.196176  678937 main.go:141] libmachine: (functional-889777) Calling .GetSSHUsername
I1025 21:50:09.196321  678937 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/functional-889777/id_rsa Username:docker}
I1025 21:50:09.313750  678937 build_images.go:161] Building image from path: /tmp/build.1423221843.tar
I1025 21:50:09.313836  678937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 21:50:09.347812  678937 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1423221843.tar
I1025 21:50:09.360697  678937 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1423221843.tar: stat -c "%s %y" /var/lib/minikube/build/build.1423221843.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1423221843.tar': No such file or directory
I1025 21:50:09.360750  678937 ssh_runner.go:362] scp /tmp/build.1423221843.tar --> /var/lib/minikube/build/build.1423221843.tar (3072 bytes)
I1025 21:50:09.397357  678937 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1423221843
I1025 21:50:09.409724  678937 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1423221843 -xf /var/lib/minikube/build/build.1423221843.tar
I1025 21:50:09.427932  678937 crio.go:315] Building image: /var/lib/minikube/build/build.1423221843
I1025 21:50:09.428026  678937 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-889777 /var/lib/minikube/build/build.1423221843 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1025 21:50:16.163346  678937 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-889777 /var/lib/minikube/build/build.1423221843 --cgroup-manager=cgroupfs: (6.735291042s)
I1025 21:50:16.163414  678937 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1423221843
I1025 21:50:16.176874  678937 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1423221843.tar
I1025 21:50:16.196345  678937 build_images.go:217] Built localhost/my-image:functional-889777 from /tmp/build.1423221843.tar
I1025 21:50:16.196397  678937 build_images.go:133] succeeded building to: functional-889777
I1025 21:50:16.196404  678937 build_images.go:134] failed building to: 
I1025 21:50:16.196436  678937 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.196454  678937 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.196762  678937 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.196787  678937 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:50:16.196797  678937 main.go:141] libmachine: Making call to close driver server
I1025 21:50:16.196806  678937 main.go:141] libmachine: (functional-889777) Calling .Close
I1025 21:50:16.196839  678937 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:16.197022  678937 main.go:141] libmachine: (functional-889777) DBG | Closing plugin on server side
I1025 21:50:16.197066  678937 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:50:16.197086  678937 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.62s)
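The STEP lines in the stdout above imply a three-instruction Containerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), and the stderr shows the build is delegated to podman inside the VM because the runtime is crio. Re-running the same build by hand, with the flags copied from the log, would look like:

  out/minikube-linux-amd64 -p functional-889777 image build \
    -t localhost/my-image:functional-889777 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-889777 image ls --format short | grep my-image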

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.822483931s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-889777
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image load --daemon kicbase/echo-server:functional-889777 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 image load --daemon kicbase/echo-server:functional-889777 --alsologtostderr: (1.377345445s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image load --daemon kicbase/echo-server:functional-889777 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-889777
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image load --daemon kicbase/echo-server:functional-889777 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image save kicbase/echo-server:functional-889777 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image rm kicbase/echo-server:functional-889777 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 image rm kicbase/echo-server:functional-889777 --alsologtostderr: (1.550834632s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-889777 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.641236058s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.94s)
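
Together, the ImageSaveToFile, ImageRemove and ImageLoadFromFile subtests form a tarball round trip. A minimal sketch with the same commands; ./echo-server-save.tar is a placeholder path (this run wrote the tarball into the Jenkins workspace):

	# export the in-cluster image to a tarball, delete it from the runtime, then restore it from the file
	out/minikube-linux-amd64 -p functional-889777 image save kicbase/echo-server:functional-889777 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-889777 image rm kicbase/echo-server:functional-889777
	out/minikube-linux-amd64 -p functional-889777 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-889777 image ls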

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "378.38492ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "64.315248ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "365.16355ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "62.584349ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
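
The ProfileCmd subtests cover the profile-listing variants. These are the commands as run above; the -l/--light forms appear to skip cluster status checks, which is consistent with the much shorter runtimes logged for them:

	out/minikube-linux-amd64 profile list                 # human-readable table
	out/minikube-linux-amd64 profile list -l              # light listing, no cluster probing
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light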

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service list -o json
functional_test.go:1494: Took "317.685169ms" to run "out/minikube-linux-amd64 -p functional-889777 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.12:32572
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.57s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdany-port111111582/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1729892993783425578" to /tmp/TestFunctionalparallelMountCmdany-port111111582/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1729892993783425578" to /tmp/TestFunctionalparallelMountCmdany-port111111582/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1729892993783425578" to /tmp/TestFunctionalparallelMountCmdany-port111111582/001/test-1729892993783425578
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.138308ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 21:49:54.068891  669177 retry.go:31] will retry after 253.599426ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 21:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 21:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 21:49 test-1729892993783425578
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh cat /mount-9p/test-1729892993783425578
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-889777 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8700b1a4-9303-416c-a5f7-60dc6ff684d9] Pending
helpers_test.go:344: "busybox-mount" [8700b1a4-9303-416c-a5f7-60dc6ff684d9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8700b1a4-9303-416c-a5f7-60dc6ff684d9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8700b1a4-9303-416c-a5f7-60dc6ff684d9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004354938s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-889777 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdany-port111111582/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.57s)
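
The any-port mount flow above can be reproduced directly. A minimal sketch; /tmp/mount-demo stands in for the per-test temp directory used by this run, and the first findmnt may need a retry while the 9p mount comes up (the test retried after roughly 250 ms):

	# run the mount in the background; it stays attached to the terminal otherwise
	out/minikube-linux-amd64 mount -p functional-889777 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	# verify the 9p mount and inspect it from inside the guest
	out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-889777 ssh -- ls -la /mount-9p
	# tear it down
	out/minikube-linux-amd64 -p functional-889777 ssh "sudo umount -f /mount-9p"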

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-889777
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 image save --daemon kicbase/echo-server:functional-889777 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-889777
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.12:32572
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
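
The ServiceCmd subtests exercise service listing and URL lookup with the following commands, all taken verbatim from the runs above:

	out/minikube-linux-amd64 -p functional-889777 service list
	out/minikube-linux-amd64 -p functional-889777 service list -o json
	out/minikube-linux-amd64 -p functional-889777 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-889777 service hello-node --url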

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
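
All three UpdateContextCmd subtests run the same command, which refreshes the profile's kubeconfig entry. Inspecting the resulting context with kubectl afterwards is an optional extra step, not part of the test:

	out/minikube-linux-amd64 -p functional-889777 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-889777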

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdspecific-port2868273086/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.753171ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 21:50:02.622529  669177 retry.go:31] will retry after 484.506565ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdspecific-port2868273086/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "sudo umount -f /mount-9p": exit status 1 (285.32304ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-889777 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdspecific-port2868273086/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount1: exit status 1 (336.953009ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 21:50:04.649218  669177 retry.go:31] will retry after 516.906627ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-889777 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-889777 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1362952386/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)
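
VerifyCleanup mounts the same host directory at three guest paths and then kills every mount process for the profile in one shot. A minimal sketch; /tmp/mount-demo is again a placeholder for the per-test temp directory:

	out/minikube-linux-amd64 mount -p functional-889777 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-889777 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-889777 /tmp/mount-demo:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-889777 ssh "findmnt -T" /mount1
	# terminate all mount processes for this profile at once
	out/minikube-linux-amd64 mount -p functional-889777 --kill=true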

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-889777
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-889777
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-889777
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.03s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-923730 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 21:50:50.811835  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:53:06.949301  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:53:34.654060  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-923730 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.357906801s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.03s)
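
StartCluster brings up a multi-control-plane cluster with the --ha flag; the per-node status output later in this report lists three control-plane nodes (ha-923730, -m02, -m03). The two commands from this run:

	out/minikube-linux-amd64 start -p ha-923730 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr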

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.22s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-923730 -- rollout status deployment/busybox: (6.039043199s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-h7dkw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-sc4xk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-h7dkw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-sc4xk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-h7dkw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-sc4xk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.22s)
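
DeployApp rolls out a busybox deployment and checks in-cluster DNS from each replica. The key commands, shown here against one pod name from this run (busybox-7dff88458-fvzwv):

	out/minikube-linux-amd64 kubectl -p ha-923730 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p ha-923730 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p ha-923730 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- nslookup kubernetes.default.svc.cluster.local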

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-h7dkw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-h7dkw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-sc4xk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-sc4xk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
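
PingHostFromPods resolves host.minikube.internal from inside each pod and pings the resulting address (192.168.39.1, the host-side address on the libvirt network in this run):

	out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-923730 -- exec busybox-7dff88458-fvzwv -- sh -c "ping -c 1 192.168.39.1"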

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.79s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-923730 -v=7 --alsologtostderr
E1025 21:54:40.903281  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:40.909724  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:40.921078  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:40.942497  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:40.983942  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:41.065445  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:41.227755  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:41.549524  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:42.191310  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:43.473618  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:46.035716  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:54:51.157032  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-923730 -v=7 --alsologtostderr: (56.920068962s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.79s)
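
AddWorkerNode grows the cluster with node add, which joins the new machine as a worker; the later per-node status output lists ha-923730-m04 with type Worker. The commands from this run:

	out/minikube-linux-amd64 node add -p ha-923730 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr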

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-923730 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.17s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp testdata/cp-test.txt ha-923730:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4262181053/001/cp-test_ha-923730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730:/home/docker/cp-test.txt ha-923730-m02:/home/docker/cp-test_ha-923730_ha-923730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test_ha-923730_ha-923730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730:/home/docker/cp-test.txt ha-923730-m03:/home/docker/cp-test_ha-923730_ha-923730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test_ha-923730_ha-923730-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730:/home/docker/cp-test.txt ha-923730-m04:/home/docker/cp-test_ha-923730_ha-923730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test_ha-923730_ha-923730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp testdata/cp-test.txt ha-923730-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"
E1025 21:55:01.399266  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4262181053/001/cp-test_ha-923730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m02:/home/docker/cp-test.txt ha-923730:/home/docker/cp-test_ha-923730-m02_ha-923730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test_ha-923730-m02_ha-923730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m02:/home/docker/cp-test.txt ha-923730-m03:/home/docker/cp-test_ha-923730-m02_ha-923730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test_ha-923730-m02_ha-923730-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m02:/home/docker/cp-test.txt ha-923730-m04:/home/docker/cp-test_ha-923730-m02_ha-923730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test_ha-923730-m02_ha-923730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp testdata/cp-test.txt ha-923730-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4262181053/001/cp-test_ha-923730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m03:/home/docker/cp-test.txt ha-923730:/home/docker/cp-test_ha-923730-m03_ha-923730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test_ha-923730-m03_ha-923730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m03:/home/docker/cp-test.txt ha-923730-m02:/home/docker/cp-test_ha-923730-m03_ha-923730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test_ha-923730-m03_ha-923730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m03:/home/docker/cp-test.txt ha-923730-m04:/home/docker/cp-test_ha-923730-m03_ha-923730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test_ha-923730-m03_ha-923730-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp testdata/cp-test.txt ha-923730-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4262181053/001/cp-test_ha-923730-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m04:/home/docker/cp-test.txt ha-923730:/home/docker/cp-test_ha-923730-m04_ha-923730.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730 "sudo cat /home/docker/cp-test_ha-923730-m04_ha-923730.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m04:/home/docker/cp-test.txt ha-923730-m02:/home/docker/cp-test_ha-923730-m04_ha-923730-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test_ha-923730-m04_ha-923730-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 cp ha-923730-m04:/home/docker/cp-test.txt ha-923730-m03:/home/docker/cp-test_ha-923730-m04_ha-923730-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m03 "sudo cat /home/docker/cp-test_ha-923730-m04_ha-923730-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.17s)
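
CopyFile round-trips a test file between the host and every node pair with minikube cp, then verifies each copy over ssh. One representative pair from the run:

	out/minikube-linux-amd64 -p ha-923730 cp testdata/cp-test.txt ha-923730-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-923730 ssh -n ha-923730-m02 "sudo cat /home/docker/cp-test.txt"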

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.65s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 node stop m02 -v=7 --alsologtostderr
E1025 21:55:21.881127  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:56:02.843102  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-923730 node stop m02 -v=7 --alsologtostderr: (1m30.988213991s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr: exit status 7 (655.845893ms)

                                                
                                                
-- stdout --
	ha-923730
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-923730-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923730-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-923730-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 21:56:41.407334  683791 out.go:345] Setting OutFile to fd 1 ...
	I1025 21:56:41.407498  683791 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:56:41.407508  683791 out.go:358] Setting ErrFile to fd 2...
	I1025 21:56:41.407515  683791 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 21:56:41.407693  683791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 21:56:41.407884  683791 out.go:352] Setting JSON to false
	I1025 21:56:41.407933  683791 mustload.go:65] Loading cluster: ha-923730
	I1025 21:56:41.408032  683791 notify.go:220] Checking for updates...
	I1025 21:56:41.408369  683791 config.go:182] Loaded profile config "ha-923730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 21:56:41.408396  683791 status.go:174] checking status of ha-923730 ...
	I1025 21:56:41.408886  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.408992  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.427049  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43379
	I1025 21:56:41.427594  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.428148  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.428171  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.428630  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.428802  683791 main.go:141] libmachine: (ha-923730) Calling .GetState
	I1025 21:56:41.430577  683791 status.go:371] ha-923730 host status = "Running" (err=<nil>)
	I1025 21:56:41.430594  683791 host.go:66] Checking if "ha-923730" exists ...
	I1025 21:56:41.430941  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.430993  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.447168  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I1025 21:56:41.447662  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.448188  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.448211  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.448558  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.448772  683791 main.go:141] libmachine: (ha-923730) Calling .GetIP
	I1025 21:56:41.451405  683791 main.go:141] libmachine: (ha-923730) DBG | domain ha-923730 has defined MAC address 52:54:00:29:ae:61 in network mk-ha-923730
	I1025 21:56:41.451890  683791 main.go:141] libmachine: (ha-923730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:ae:61", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:50:46 +0000 UTC Type:0 Mac:52:54:00:29:ae:61 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-923730 Clientid:01:52:54:00:29:ae:61}
	I1025 21:56:41.451915  683791 main.go:141] libmachine: (ha-923730) DBG | domain ha-923730 has defined IP address 192.168.39.56 and MAC address 52:54:00:29:ae:61 in network mk-ha-923730
	I1025 21:56:41.451977  683791 host.go:66] Checking if "ha-923730" exists ...
	I1025 21:56:41.452317  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.452360  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.467735  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I1025 21:56:41.468287  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.468883  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.468911  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.469323  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.469548  683791 main.go:141] libmachine: (ha-923730) Calling .DriverName
	I1025 21:56:41.469781  683791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:56:41.469826  683791 main.go:141] libmachine: (ha-923730) Calling .GetSSHHostname
	I1025 21:56:41.472830  683791 main.go:141] libmachine: (ha-923730) DBG | domain ha-923730 has defined MAC address 52:54:00:29:ae:61 in network mk-ha-923730
	I1025 21:56:41.473393  683791 main.go:141] libmachine: (ha-923730) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:ae:61", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:50:46 +0000 UTC Type:0 Mac:52:54:00:29:ae:61 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-923730 Clientid:01:52:54:00:29:ae:61}
	I1025 21:56:41.473433  683791 main.go:141] libmachine: (ha-923730) DBG | domain ha-923730 has defined IP address 192.168.39.56 and MAC address 52:54:00:29:ae:61 in network mk-ha-923730
	I1025 21:56:41.473521  683791 main.go:141] libmachine: (ha-923730) Calling .GetSSHPort
	I1025 21:56:41.473676  683791 main.go:141] libmachine: (ha-923730) Calling .GetSSHKeyPath
	I1025 21:56:41.473828  683791 main.go:141] libmachine: (ha-923730) Calling .GetSSHUsername
	I1025 21:56:41.474044  683791 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/ha-923730/id_rsa Username:docker}
	I1025 21:56:41.558421  683791 ssh_runner.go:195] Run: systemctl --version
	I1025 21:56:41.565295  683791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:56:41.584118  683791 kubeconfig.go:125] found "ha-923730" server: "https://192.168.39.254:8443"
	I1025 21:56:41.584156  683791 api_server.go:166] Checking apiserver status ...
	I1025 21:56:41.584190  683791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:56:41.601432  683791 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup
	W1025 21:56:41.611667  683791 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1090/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 21:56:41.611743  683791 ssh_runner.go:195] Run: ls
	I1025 21:56:41.616526  683791 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 21:56:41.620863  683791 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 21:56:41.620888  683791 status.go:463] ha-923730 apiserver status = Running (err=<nil>)
	I1025 21:56:41.620902  683791 status.go:176] ha-923730 status: &{Name:ha-923730 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:56:41.620928  683791 status.go:174] checking status of ha-923730-m02 ...
	I1025 21:56:41.621263  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.621308  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.636531  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42559
	I1025 21:56:41.637046  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.637549  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.637570  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.637905  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.638096  683791 main.go:141] libmachine: (ha-923730-m02) Calling .GetState
	I1025 21:56:41.639635  683791 status.go:371] ha-923730-m02 host status = "Stopped" (err=<nil>)
	I1025 21:56:41.639649  683791 status.go:384] host is not running, skipping remaining checks
	I1025 21:56:41.639655  683791 status.go:176] ha-923730-m02 status: &{Name:ha-923730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:56:41.639672  683791 status.go:174] checking status of ha-923730-m03 ...
	I1025 21:56:41.639955  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.640010  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.654696  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I1025 21:56:41.655156  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.655717  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.655741  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.656081  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.656281  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetState
	I1025 21:56:41.657968  683791 status.go:371] ha-923730-m03 host status = "Running" (err=<nil>)
	I1025 21:56:41.657984  683791 host.go:66] Checking if "ha-923730-m03" exists ...
	I1025 21:56:41.658362  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.658402  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.673114  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41707
	I1025 21:56:41.673547  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.674015  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.674037  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.674372  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.674554  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetIP
	I1025 21:56:41.677767  683791 main.go:141] libmachine: (ha-923730-m03) DBG | domain ha-923730-m03 has defined MAC address 52:54:00:17:3c:d4 in network mk-ha-923730
	I1025 21:56:41.678216  683791 main.go:141] libmachine: (ha-923730-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:3c:d4", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:52:46 +0000 UTC Type:0 Mac:52:54:00:17:3c:d4 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-923730-m03 Clientid:01:52:54:00:17:3c:d4}
	I1025 21:56:41.678249  683791 main.go:141] libmachine: (ha-923730-m03) DBG | domain ha-923730-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:17:3c:d4 in network mk-ha-923730
	I1025 21:56:41.678430  683791 host.go:66] Checking if "ha-923730-m03" exists ...
	I1025 21:56:41.678853  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.678900  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.695099  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35997
	I1025 21:56:41.695613  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.696106  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.696134  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.696466  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.696621  683791 main.go:141] libmachine: (ha-923730-m03) Calling .DriverName
	I1025 21:56:41.696786  683791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:56:41.696805  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetSSHHostname
	I1025 21:56:41.699280  683791 main.go:141] libmachine: (ha-923730-m03) DBG | domain ha-923730-m03 has defined MAC address 52:54:00:17:3c:d4 in network mk-ha-923730
	I1025 21:56:41.699650  683791 main.go:141] libmachine: (ha-923730-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:3c:d4", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:52:46 +0000 UTC Type:0 Mac:52:54:00:17:3c:d4 Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:ha-923730-m03 Clientid:01:52:54:00:17:3c:d4}
	I1025 21:56:41.699679  683791 main.go:141] libmachine: (ha-923730-m03) DBG | domain ha-923730-m03 has defined IP address 192.168.39.111 and MAC address 52:54:00:17:3c:d4 in network mk-ha-923730
	I1025 21:56:41.699820  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetSSHPort
	I1025 21:56:41.699978  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetSSHKeyPath
	I1025 21:56:41.700123  683791 main.go:141] libmachine: (ha-923730-m03) Calling .GetSSHUsername
	I1025 21:56:41.700239  683791 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/ha-923730-m03/id_rsa Username:docker}
	I1025 21:56:41.785893  683791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:56:41.806964  683791 kubeconfig.go:125] found "ha-923730" server: "https://192.168.39.254:8443"
	I1025 21:56:41.807003  683791 api_server.go:166] Checking apiserver status ...
	I1025 21:56:41.807051  683791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:56:41.824244  683791 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W1025 21:56:41.837172  683791 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 21:56:41.837238  683791 ssh_runner.go:195] Run: ls
	I1025 21:56:41.842236  683791 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1025 21:56:41.846761  683791 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1025 21:56:41.846787  683791 status.go:463] ha-923730-m03 apiserver status = Running (err=<nil>)
	I1025 21:56:41.846796  683791 status.go:176] ha-923730-m03 status: &{Name:ha-923730-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:56:41.846812  683791 status.go:174] checking status of ha-923730-m04 ...
	I1025 21:56:41.847246  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.847298  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.863313  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I1025 21:56:41.863835  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.864338  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.864359  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.864681  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.864852  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetState
	I1025 21:56:41.866348  683791 status.go:371] ha-923730-m04 host status = "Running" (err=<nil>)
	I1025 21:56:41.866364  683791 host.go:66] Checking if "ha-923730-m04" exists ...
	I1025 21:56:41.866796  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.866845  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.881889  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I1025 21:56:41.882359  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.882893  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.882919  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.883244  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.883434  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetIP
	I1025 21:56:41.886149  683791 main.go:141] libmachine: (ha-923730-m04) DBG | domain ha-923730-m04 has defined MAC address 52:54:00:8b:b9:12 in network mk-ha-923730
	I1025 21:56:41.886517  683791 main.go:141] libmachine: (ha-923730-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:b9:12", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:54:14 +0000 UTC Type:0 Mac:52:54:00:8b:b9:12 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-923730-m04 Clientid:01:52:54:00:8b:b9:12}
	I1025 21:56:41.886541  683791 main.go:141] libmachine: (ha-923730-m04) DBG | domain ha-923730-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:8b:b9:12 in network mk-ha-923730
	I1025 21:56:41.886650  683791 host.go:66] Checking if "ha-923730-m04" exists ...
	I1025 21:56:41.886926  683791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 21:56:41.886983  683791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:41.901919  683791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1025 21:56:41.902314  683791 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:41.902793  683791 main.go:141] libmachine: Using API Version  1
	I1025 21:56:41.902825  683791 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:41.903140  683791 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:41.903349  683791 main.go:141] libmachine: (ha-923730-m04) Calling .DriverName
	I1025 21:56:41.903572  683791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:56:41.903599  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetSSHHostname
	I1025 21:56:41.906391  683791 main.go:141] libmachine: (ha-923730-m04) DBG | domain ha-923730-m04 has defined MAC address 52:54:00:8b:b9:12 in network mk-ha-923730
	I1025 21:56:41.906770  683791 main.go:141] libmachine: (ha-923730-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:b9:12", ip: ""} in network mk-ha-923730: {Iface:virbr1 ExpiryTime:2024-10-25 22:54:14 +0000 UTC Type:0 Mac:52:54:00:8b:b9:12 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-923730-m04 Clientid:01:52:54:00:8b:b9:12}
	I1025 21:56:41.906796  683791 main.go:141] libmachine: (ha-923730-m04) DBG | domain ha-923730-m04 has defined IP address 192.168.39.216 and MAC address 52:54:00:8b:b9:12 in network mk-ha-923730
	I1025 21:56:41.906934  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetSSHPort
	I1025 21:56:41.907111  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetSSHKeyPath
	I1025 21:56:41.907255  683791 main.go:141] libmachine: (ha-923730-m04) Calling .GetSSHUsername
	I1025 21:56:41.907403  683791 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/ha-923730-m04/id_rsa Username:docker}
	I1025 21:56:41.994038  683791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:56:42.011945  683791 status.go:176] ha-923730-m04 status: &{Name:ha-923730-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (52.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 node start m02 -v=7 --alsologtostderr
E1025 21:57:24.765261  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-923730 node start m02 -v=7 --alsologtostderr: (51.626511586s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (52.54s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-923730 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-923730 -v=7 --alsologtostderr
E1025 21:58:06.945989  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 21:59:40.903023  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:00:08.607140  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-923730 -v=7 --alsologtostderr: (4m34.099453171s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-923730 --wait=true -v=7 --alsologtostderr
E1025 22:03:06.946297  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:04:30.016124  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:04:40.902595  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-923730 --wait=true -v=7 --alsologtostderr: (2m36.953600529s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-923730
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (431.18s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-923730 node delete m03 -v=7 --alsologtostderr: (16.165449326s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.89s)
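Note on the go-template check above: kubectl evaluates that expression with Go's text/template, iterating every node's conditions and printing the status of its "Ready" condition. Below is a minimal, self-contained Go sketch of the same template run against a hand-written stand-in for the node list; the nodes and condition values are illustrative only and not taken from this run.

package main

import (
	"os"
	"text/template"
)

// The template string the test passes to `kubectl get nodes -o go-template=...`.
// It walks every node and prints the status of its "Ready" condition.
const nodeReadyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Tiny stand-in for the JSON a real `kubectl get nodes -o json` would return.
	nodeList := map[string]any{
		"items": []any{
			map[string]any{
				"status": map[string]any{
					"conditions": []any{
						map[string]any{"type": "MemoryPressure", "status": "False"},
						map[string]any{"type": "Ready", "status": "True"},
					},
				},
			},
			map[string]any{
				"status": map[string]any{
					"conditions": []any{
						map[string]any{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}

	tmpl := template.Must(template.New("ready").Parse(nodeReadyTmpl))
	// Prints " True" once per node, which is the pattern the test looks for.
	if err := tmpl.Execute(os.Stdout, nodeList); err != nil {
		panic(err)
	}
}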

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 stop -v=7 --alsologtostderr
E1025 22:08:06.946778  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-923730 stop -v=7 --alsologtostderr: (4m32.839044771s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr: exit status 7 (113.715905ms)

                                                
                                                
-- stdout --
	ha-923730
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923730-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-923730-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:09:37.655342  687918 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:09:37.655623  687918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:09:37.655633  687918 out.go:358] Setting ErrFile to fd 2...
	I1025 22:09:37.655637  687918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:09:37.655852  687918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:09:37.656007  687918 out.go:352] Setting JSON to false
	I1025 22:09:37.656034  687918 mustload.go:65] Loading cluster: ha-923730
	I1025 22:09:37.656144  687918 notify.go:220] Checking for updates...
	I1025 22:09:37.656399  687918 config.go:182] Loaded profile config "ha-923730": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:09:37.656420  687918 status.go:174] checking status of ha-923730 ...
	I1025 22:09:37.657621  687918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:09:37.657730  687918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:09:37.679823  687918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42347
	I1025 22:09:37.680340  687918 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:09:37.680986  687918 main.go:141] libmachine: Using API Version  1
	I1025 22:09:37.681011  687918 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:09:37.681458  687918 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:09:37.681670  687918 main.go:141] libmachine: (ha-923730) Calling .GetState
	I1025 22:09:37.683384  687918 status.go:371] ha-923730 host status = "Stopped" (err=<nil>)
	I1025 22:09:37.683400  687918 status.go:384] host is not running, skipping remaining checks
	I1025 22:09:37.683407  687918 status.go:176] ha-923730 status: &{Name:ha-923730 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:09:37.683457  687918 status.go:174] checking status of ha-923730-m02 ...
	I1025 22:09:37.683770  687918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:09:37.683821  687918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:09:37.698531  687918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I1025 22:09:37.698981  687918 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:09:37.699402  687918 main.go:141] libmachine: Using API Version  1
	I1025 22:09:37.699421  687918 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:09:37.699750  687918 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:09:37.699934  687918 main.go:141] libmachine: (ha-923730-m02) Calling .GetState
	I1025 22:09:37.701494  687918 status.go:371] ha-923730-m02 host status = "Stopped" (err=<nil>)
	I1025 22:09:37.701510  687918 status.go:384] host is not running, skipping remaining checks
	I1025 22:09:37.701517  687918 status.go:176] ha-923730-m02 status: &{Name:ha-923730-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:09:37.701555  687918 status.go:174] checking status of ha-923730-m04 ...
	I1025 22:09:37.701869  687918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:09:37.701913  687918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:09:37.716528  687918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43151
	I1025 22:09:37.716991  687918 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:09:37.717479  687918 main.go:141] libmachine: Using API Version  1
	I1025 22:09:37.717499  687918 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:09:37.717790  687918 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:09:37.717965  687918 main.go:141] libmachine: (ha-923730-m04) Calling .GetState
	I1025 22:09:37.719495  687918 status.go:371] ha-923730-m04 host status = "Stopped" (err=<nil>)
	I1025 22:09:37.719507  687918 status.go:384] host is not running, skipping remaining checks
	I1025 22:09:37.719514  687918 status.go:176] ha-923730-m04 status: &{Name:ha-923730-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.95s)
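Note on the exit code above: `minikube status` exits 0 only when everything is running; once every node in ha-923730 was stopped it returned exit status 7. A minimal Go sketch of checking that exit code from a caller's side follows; the binary path and profile name are assumptions of this sketch, not part of the test.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Illustrative only: run `minikube status` for a profile and inspect the exit
	// code. In the log above, status returned exit code 7 once every node in the
	// ha-923730 cluster was stopped; 0 means everything is running.
	cmd := exec.Command("out/minikube-linux-amd64", "status", "-p", "ha-923730")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit code 0: cluster components are running")
	case errors.As(err, &exitErr):
		fmt.Printf("exit code %d: one or more components are not running\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}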

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (124.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-923730 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 22:09:40.902791  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:11:03.969309  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-923730 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.894818576s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (124.64s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-923730 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-923730 --control-plane -v=7 --alsologtostderr: (1m17.46255093s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-923730 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.30s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
TestJSONOutput/start/Command (55.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-504258 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1025 22:13:06.946872  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-504258 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.263951815s)
--- PASS: TestJSONOutput/start/Command (55.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-504258 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-504258 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-504258 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-504258 --output=json --user=testUser: (7.353151574s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-368958 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-368958 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.122153ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"80290ac2-4a3b-49ce-81b1-6b9049a23edb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-368958] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"716ab8a1-a82f-4945-a2bf-6cd682353243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19758"}}
	{"specversion":"1.0","id":"a8aee5c5-f21e-40a3-88b3-3afadeb34910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb9a9c18-4c1f-4aa5-ae5d-bd6b1079e7dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig"}}
	{"specversion":"1.0","id":"574442b2-5fbc-4fa1-be36-7eff48d97069","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube"}}
	{"specversion":"1.0","id":"5294ae62-46d1-49b1-9e43-1c142f2ddbff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9ebf4bd7-d425-484a-8852-5588c0fb093b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b40f2def-4db6-4031-bf68-db0b90e66060","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-368958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-368958
--- PASS: TestErrorJSONOutput (0.21s)
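For reference, each line in the stdout block above is a CloudEvents-style JSON object emitted by `--output=json`. A small Go sketch that decodes the error event shown above; the struct field names are inferred only from the lines captured in this log.

package main

import (
	"encoding/json"
	"fmt"
)

// Field names mirror the events captured in the stdout block above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// One event copied from the test output above (the error emitted for --driver=fail).
	line := `{"specversion":"1.0","id":"b40f2def-4db6-4031-bf68-db0b90e66060","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
}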

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (93.37s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-721517 --driver=kvm2  --container-runtime=crio
E1025 22:14:40.902535  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-721517 --driver=kvm2  --container-runtime=crio: (44.76627313s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-732755 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-732755 --driver=kvm2  --container-runtime=crio: (45.51329576s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-721517
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-732755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-732755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-732755
helpers_test.go:175: Cleaning up "first-721517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-721517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-721517: (1.01131951s)
--- PASS: TestMinikubeProfile (93.37s)
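Aside on `profile list -ojson` as used above: a hedged Go sketch that shells out the same way and decodes the result generically, since the exact output schema is not reproduced in this log; the binary path is an assumption of the sketch.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Sketch only: list profiles the same way the test does and decode the JSON
	// generically, without committing to concrete field names.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var payload map[string]json.RawMessage
	if err := json.Unmarshal(out, &payload); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key, raw := range payload {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}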

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-231786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-231786 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.551751797s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.55s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-231786 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-231786 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-248246 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-248246 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.507872868s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-231786 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-248246
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-248246: (1.327562688s)
--- PASS: TestMountStart/serial/Stop (1.33s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-248246
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-248246: (22.177595362s)
--- PASS: TestMountStart/serial/RestartStopped (23.18s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-248246 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-511849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 22:18:06.946275  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-511849 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.425971134s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.83s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-511849 -- rollout status deployment/busybox: (3.823188931s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-99d7s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-xkh6z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-99d7s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-xkh6z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-99d7s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-xkh6z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-99d7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-99d7s -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-xkh6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-511849 -- exec busybox-7dff88458-xkh6z -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
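Note on the pipeline above: `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the fifth line of BusyBox nslookup output and extracts its third space-separated field, which is where the resolved address appears. A small Go sketch of the same extraction on an illustrative sample; the addresses below are made up.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative stand-in for `nslookup host.minikube.internal` output inside a pod.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1
`
	lines := strings.Split(sample, "\n")
	if len(lines) < 5 {
		fmt.Println("unexpected nslookup output")
		return
	}
	// awk 'NR==5' selects line 5; cut -d' ' -f3 selects the third space-separated field.
	fields := strings.Split(lines[4], " ")
	if len(fields) >= 3 {
		fmt.Println("host IP:", fields[2])
	}
}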

                                                
                                    
TestMultiNode/serial/AddNode (50.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-511849 -v 3 --alsologtostderr
E1025 22:19:40.902788  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-511849 -v 3 --alsologtostderr: (49.509946864s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-511849 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp testdata/cp-test.txt multinode-511849:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2075670453/001/cp-test_multinode-511849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849:/home/docker/cp-test.txt multinode-511849-m02:/home/docker/cp-test_multinode-511849_multinode-511849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test_multinode-511849_multinode-511849-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849:/home/docker/cp-test.txt multinode-511849-m03:/home/docker/cp-test_multinode-511849_multinode-511849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test_multinode-511849_multinode-511849-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp testdata/cp-test.txt multinode-511849-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2075670453/001/cp-test_multinode-511849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m02:/home/docker/cp-test.txt multinode-511849:/home/docker/cp-test_multinode-511849-m02_multinode-511849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test_multinode-511849-m02_multinode-511849.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m02:/home/docker/cp-test.txt multinode-511849-m03:/home/docker/cp-test_multinode-511849-m02_multinode-511849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test_multinode-511849-m02_multinode-511849-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp testdata/cp-test.txt multinode-511849-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2075670453/001/cp-test_multinode-511849-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m03:/home/docker/cp-test.txt multinode-511849:/home/docker/cp-test_multinode-511849-m03_multinode-511849.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849 "sudo cat /home/docker/cp-test_multinode-511849-m03_multinode-511849.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 cp multinode-511849-m03:/home/docker/cp-test.txt multinode-511849-m02:/home/docker/cp-test_multinode-511849-m03_multinode-511849-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 ssh -n multinode-511849-m02 "sudo cat /home/docker/cp-test_multinode-511849-m03_multinode-511849-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-511849 node stop m03: (1.413465035s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-511849 status: exit status 7 (431.181153ms)

                                                
                                                
-- stdout --
	multinode-511849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-511849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-511849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr: exit status 7 (429.551636ms)

                                                
                                                
-- stdout --
	multinode-511849
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-511849-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-511849-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:20:11.101168  696081 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:20:11.101285  696081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:20:11.101294  696081 out.go:358] Setting ErrFile to fd 2...
	I1025 22:20:11.101298  696081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:20:11.101513  696081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:20:11.101700  696081 out.go:352] Setting JSON to false
	I1025 22:20:11.101730  696081 mustload.go:65] Loading cluster: multinode-511849
	I1025 22:20:11.101861  696081 notify.go:220] Checking for updates...
	I1025 22:20:11.102304  696081 config.go:182] Loaded profile config "multinode-511849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:20:11.102335  696081 status.go:174] checking status of multinode-511849 ...
	I1025 22:20:11.102841  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.102892  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.120497  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41323
	I1025 22:20:11.120984  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.121579  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.121595  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.121906  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.122075  696081 main.go:141] libmachine: (multinode-511849) Calling .GetState
	I1025 22:20:11.123621  696081 status.go:371] multinode-511849 host status = "Running" (err=<nil>)
	I1025 22:20:11.123640  696081 host.go:66] Checking if "multinode-511849" exists ...
	I1025 22:20:11.123942  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.123981  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.139028  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37809
	I1025 22:20:11.139471  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.139940  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.139961  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.140277  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.140450  696081 main.go:141] libmachine: (multinode-511849) Calling .GetIP
	I1025 22:20:11.143186  696081 main.go:141] libmachine: (multinode-511849) DBG | domain multinode-511849 has defined MAC address 52:54:00:6b:ab:c6 in network mk-multinode-511849
	I1025 22:20:11.143605  696081 main.go:141] libmachine: (multinode-511849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ab:c6", ip: ""} in network mk-multinode-511849: {Iface:virbr1 ExpiryTime:2024-10-25 23:17:23 +0000 UTC Type:0 Mac:52:54:00:6b:ab:c6 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-511849 Clientid:01:52:54:00:6b:ab:c6}
	I1025 22:20:11.143629  696081 main.go:141] libmachine: (multinode-511849) DBG | domain multinode-511849 has defined IP address 192.168.39.46 and MAC address 52:54:00:6b:ab:c6 in network mk-multinode-511849
	I1025 22:20:11.143738  696081 host.go:66] Checking if "multinode-511849" exists ...
	I1025 22:20:11.144021  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.144060  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.159994  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44467
	I1025 22:20:11.160500  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.161072  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.161099  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.161439  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.161617  696081 main.go:141] libmachine: (multinode-511849) Calling .DriverName
	I1025 22:20:11.161789  696081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 22:20:11.161820  696081 main.go:141] libmachine: (multinode-511849) Calling .GetSSHHostname
	I1025 22:20:11.164855  696081 main.go:141] libmachine: (multinode-511849) DBG | domain multinode-511849 has defined MAC address 52:54:00:6b:ab:c6 in network mk-multinode-511849
	I1025 22:20:11.165317  696081 main.go:141] libmachine: (multinode-511849) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ab:c6", ip: ""} in network mk-multinode-511849: {Iface:virbr1 ExpiryTime:2024-10-25 23:17:23 +0000 UTC Type:0 Mac:52:54:00:6b:ab:c6 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-511849 Clientid:01:52:54:00:6b:ab:c6}
	I1025 22:20:11.165342  696081 main.go:141] libmachine: (multinode-511849) DBG | domain multinode-511849 has defined IP address 192.168.39.46 and MAC address 52:54:00:6b:ab:c6 in network mk-multinode-511849
	I1025 22:20:11.165484  696081 main.go:141] libmachine: (multinode-511849) Calling .GetSSHPort
	I1025 22:20:11.165618  696081 main.go:141] libmachine: (multinode-511849) Calling .GetSSHKeyPath
	I1025 22:20:11.165774  696081 main.go:141] libmachine: (multinode-511849) Calling .GetSSHUsername
	I1025 22:20:11.165929  696081 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/multinode-511849/id_rsa Username:docker}
	I1025 22:20:11.249025  696081 ssh_runner.go:195] Run: systemctl --version
	I1025 22:20:11.255069  696081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:20:11.270633  696081 kubeconfig.go:125] found "multinode-511849" server: "https://192.168.39.46:8443"
	I1025 22:20:11.270670  696081 api_server.go:166] Checking apiserver status ...
	I1025 22:20:11.270706  696081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 22:20:11.283903  696081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup
	W1025 22:20:11.294250  696081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:20:11.294347  696081 ssh_runner.go:195] Run: ls
	I1025 22:20:11.298802  696081 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I1025 22:20:11.302989  696081 api_server.go:279] https://192.168.39.46:8443/healthz returned 200:
	ok
	I1025 22:20:11.303012  696081 status.go:463] multinode-511849 apiserver status = Running (err=<nil>)
	I1025 22:20:11.303022  696081 status.go:176] multinode-511849 status: &{Name:multinode-511849 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:20:11.303045  696081 status.go:174] checking status of multinode-511849-m02 ...
	I1025 22:20:11.303377  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.303415  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.319041  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38951
	I1025 22:20:11.319551  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.320063  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.320081  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.320409  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.320588  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetState
	I1025 22:20:11.322088  696081 status.go:371] multinode-511849-m02 host status = "Running" (err=<nil>)
	I1025 22:20:11.322108  696081 host.go:66] Checking if "multinode-511849-m02" exists ...
	I1025 22:20:11.322384  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.322423  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.337949  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I1025 22:20:11.338418  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.338861  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.338883  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.339179  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.339337  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetIP
	I1025 22:20:11.341872  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | domain multinode-511849-m02 has defined MAC address 52:54:00:3c:df:d0 in network mk-multinode-511849
	I1025 22:20:11.342247  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:df:d0", ip: ""} in network mk-multinode-511849: {Iface:virbr1 ExpiryTime:2024-10-25 23:18:28 +0000 UTC Type:0 Mac:52:54:00:3c:df:d0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-511849-m02 Clientid:01:52:54:00:3c:df:d0}
	I1025 22:20:11.342284  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | domain multinode-511849-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:3c:df:d0 in network mk-multinode-511849
	I1025 22:20:11.342438  696081 host.go:66] Checking if "multinode-511849-m02" exists ...
	I1025 22:20:11.342741  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.342778  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.357987  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I1025 22:20:11.358451  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.358896  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.358920  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.359262  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.359463  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .DriverName
	I1025 22:20:11.359729  696081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 22:20:11.359754  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetSSHHostname
	I1025 22:20:11.362762  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | domain multinode-511849-m02 has defined MAC address 52:54:00:3c:df:d0 in network mk-multinode-511849
	I1025 22:20:11.363166  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:df:d0", ip: ""} in network mk-multinode-511849: {Iface:virbr1 ExpiryTime:2024-10-25 23:18:28 +0000 UTC Type:0 Mac:52:54:00:3c:df:d0 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-511849-m02 Clientid:01:52:54:00:3c:df:d0}
	I1025 22:20:11.363196  696081 main.go:141] libmachine: (multinode-511849-m02) DBG | domain multinode-511849-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:3c:df:d0 in network mk-multinode-511849
	I1025 22:20:11.363326  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetSSHPort
	I1025 22:20:11.363545  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetSSHKeyPath
	I1025 22:20:11.363682  696081 main.go:141] libmachine: (multinode-511849-m02) Calling .GetSSHUsername
	I1025 22:20:11.363808  696081 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19758-661979/.minikube/machines/multinode-511849-m02/id_rsa Username:docker}
	I1025 22:20:11.445039  696081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:20:11.461015  696081 status.go:176] multinode-511849-m02 status: &{Name:multinode-511849-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:20:11.461052  696081 status.go:174] checking status of multinode-511849-m03 ...
	I1025 22:20:11.461407  696081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:20:11.461460  696081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:20:11.477320  696081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I1025 22:20:11.477769  696081 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:20:11.478257  696081 main.go:141] libmachine: Using API Version  1
	I1025 22:20:11.478279  696081 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:20:11.478632  696081 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:20:11.478814  696081 main.go:141] libmachine: (multinode-511849-m03) Calling .GetState
	I1025 22:20:11.480251  696081 status.go:371] multinode-511849-m03 host status = "Stopped" (err=<nil>)
	I1025 22:20:11.480266  696081 status.go:384] host is not running, skipping remaining checks
	I1025 22:20:11.480271  696081 status.go:176] multinode-511849-m03 status: &{Name:multinode-511849-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
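The status log above ends with the apiserver health being confirmed against its /healthz endpoint (api_server.go: "Checking apiserver healthz at https://192.168.39.46:8443/healthz ... returned 200: ok"). A minimal, self-contained sketch of such a probe follows; it is a hypothetical helper, not minikube's actual code, and it skips TLS verification the way an out-of-cluster smoke check might:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs an apiserver's /healthz endpoint and reports whether it
// answered 200 with the body "ok".
func checkHealthz(baseURL string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-local CA, so this sketch skips
		// certificate verification rather than loading the CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(baseURL + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.46:8443"); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}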

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-511849 node start m03 -v=7 --alsologtostderr: (38.845296919s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (344.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-511849
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-511849
E1025 22:21:10.018490  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:23:06.951246  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-511849: (3m3.095584327s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-511849 --wait=true -v=8 --alsologtostderr
E1025 22:24:40.902307  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-511849 --wait=true -v=8 --alsologtostderr: (2m40.945521808s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-511849
--- PASS: TestMultiNode/serial/RestartKeepsNodes (344.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-511849 node delete m03: (1.713901327s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)
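The final kubectl invocation above uses a go-template to print the status of every node's "Ready" condition. A small sketch of how that template evaluates, run against a trimmed-down JSON payload with Go's text/template (the JSON here is a made-up two-node example, not output captured from the cluster):

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

// A trimmed-down `kubectl get nodes -o json` payload; only the fields the
// template below reads are included.
const nodesJSON = `{
  "items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
  ]
}`

func main() {
	// The same go-template the test hands to kubectl: walk every node's
	// conditions and print the status of each "Ready" condition.
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	var data interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &data); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
	// Prints one " True" line per node in this example.
}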

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 stop
E1025 22:27:43.973057  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:28:06.951239  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-511849 stop: (3m1.498831971s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-511849 status: exit status 7 (94.696724ms)

                                                
                                                
-- stdout --
	multinode-511849
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-511849-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr: exit status 7 (90.174313ms)

                                                
                                                
-- stdout --
	multinode-511849
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-511849-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:29:38.977242  699112 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:29:38.977371  699112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:29:38.977384  699112 out.go:358] Setting ErrFile to fd 2...
	I1025 22:29:38.977388  699112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:29:38.977561  699112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:29:38.977763  699112 out.go:352] Setting JSON to false
	I1025 22:29:38.977791  699112 mustload.go:65] Loading cluster: multinode-511849
	I1025 22:29:38.977905  699112 notify.go:220] Checking for updates...
	I1025 22:29:38.978172  699112 config.go:182] Loaded profile config "multinode-511849": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:29:38.978191  699112 status.go:174] checking status of multinode-511849 ...
	I1025 22:29:38.978616  699112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:29:38.978672  699112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:29:38.999124  699112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I1025 22:29:38.999531  699112 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:29:39.000023  699112 main.go:141] libmachine: Using API Version  1
	I1025 22:29:39.000045  699112 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:29:39.000419  699112 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:29:39.000631  699112 main.go:141] libmachine: (multinode-511849) Calling .GetState
	I1025 22:29:39.002158  699112 status.go:371] multinode-511849 host status = "Stopped" (err=<nil>)
	I1025 22:29:39.002174  699112 status.go:384] host is not running, skipping remaining checks
	I1025 22:29:39.002181  699112 status.go:176] multinode-511849 status: &{Name:multinode-511849 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 22:29:39.002218  699112 status.go:174] checking status of multinode-511849-m02 ...
	I1025 22:29:39.002527  699112 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1025 22:29:39.002581  699112 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:29:39.017293  699112 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43489
	I1025 22:29:39.017740  699112 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:29:39.018183  699112 main.go:141] libmachine: Using API Version  1
	I1025 22:29:39.018209  699112 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:29:39.018566  699112 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:29:39.018782  699112 main.go:141] libmachine: (multinode-511849-m02) Calling .GetState
	I1025 22:29:39.020173  699112 status.go:371] multinode-511849-m02 host status = "Stopped" (err=<nil>)
	I1025 22:29:39.020187  699112 status.go:384] host is not running, skipping remaining checks
	I1025 22:29:39.020195  699112 status.go:176] multinode-511849-m02 status: &{Name:multinode-511849-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.68s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (113.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-511849 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1025 22:29:40.902988  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-511849 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.276516392s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-511849 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-511849
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-511849-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-511849-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.796884ms)

                                                
                                                
-- stdout --
	* [multinode-511849-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-511849-m02' is duplicated with machine name 'multinode-511849-m02' in profile 'multinode-511849'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-511849-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-511849-m03 --driver=kvm2  --container-runtime=crio: (43.215666655s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-511849
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-511849: exit status 80 (224.479171ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-511849 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-511849-m03 already exists in multinode-511849-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-511849-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.37s)
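The MK_USAGE failure above comes from a profile-name uniqueness check: "multinode-511849-m02" collides with a machine name inside the existing "multinode-511849" profile. A rough sketch of that kind of check, using a hypothetical simplified profile shape rather than minikube's real config structs:

package main

import "fmt"

// profile is a simplified stand-in: a cluster profile plus the machine names
// of its nodes (hypothetical shape, for illustration only).
type profile struct {
	Name  string
	Nodes []string
}

// validateName rejects a proposed profile name that matches an existing
// profile or any machine name inside an existing multi-node profile,
// mirroring the error text seen in the test output above.
func validateName(name string, existing []profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, n := range p.Nodes {
			if n == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, n, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{Name: "multinode-511849", Nodes: []string{"multinode-511849", "multinode-511849-m02"}}}
	fmt.Println(validateName("multinode-511849-m02", existing)) // rejected
	fmt.Println(validateName("multinode-511849-m04", existing)) // <nil>, allowed
}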

                                                
                                    
TestScheduledStopUnix (112.44s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-738341 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-738341 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.803451242s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738341 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-738341 -n scheduled-stop-738341
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738341 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1025 22:35:52.044076  669177 retry.go:31] will retry after 142.068µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.045255  669177 retry.go:31] will retry after 206.002µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.046397  669177 retry.go:31] will retry after 268.897µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.047552  669177 retry.go:31] will retry after 317.849µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.048684  669177 retry.go:31] will retry after 644.711µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.049803  669177 retry.go:31] will retry after 582.394µs: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.050920  669177 retry.go:31] will retry after 1.308747ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.053125  669177 retry.go:31] will retry after 1.93835ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.055346  669177 retry.go:31] will retry after 3.800983ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.059550  669177 retry.go:31] will retry after 5.37265ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.065751  669177 retry.go:31] will retry after 3.245406ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.069948  669177 retry.go:31] will retry after 9.746381ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.080183  669177 retry.go:31] will retry after 18.790989ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.099479  669177 retry.go:31] will retry after 28.831647ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
I1025 22:35:52.128767  669177 retry.go:31] will retry after 16.39361ms: open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/scheduled-stop-738341/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738341 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738341 -n scheduled-stop-738341
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-738341
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-738341 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-738341
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-738341: exit status 7 (68.106316ms)

                                                
                                                
-- stdout --
	scheduled-stop-738341
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738341 -n scheduled-stop-738341
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-738341 -n scheduled-stop-738341: exit status 7 (67.461894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-738341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-738341
--- PASS: TestScheduledStopUnix (112.44s)
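The retry.go lines above show the test polling for the scheduled-stop pid file with short, roughly doubling waits until it appears. A minimal sketch of that polling pattern, assuming a hypothetical pid-file path and budget (the jitter and doubling are illustrative, not the exact backoff minikube uses):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForPIDFile polls for a pid file with jittered, roughly exponential
// backoff until it exists or the time budget runs out.
func waitForPIDFile(path string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := 100 * time.Microsecond
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jittered, roughly doubling wait, as in the "will retry after ..." lines.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: open %s: no such file or directory\n", sleep, path)
		time.Sleep(sleep)
		wait *= 2
	}
	return errors.New("pid file never appeared: " + path)
}

func main() {
	_ = waitForPIDFile("/tmp/scheduled-stop-demo/pid", 50*time.Millisecond)
}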

                                                
                                    
TestRunningBinaryUpgrade (245.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1214848999 start -p running-upgrade-587743 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1214848999 start -p running-upgrade-587743 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.4672061s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-587743 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-587743 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m58.786145887s)
helpers_test.go:175: Cleaning up "running-upgrade-587743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-587743
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-587743: (1.158338971s)
--- PASS: TestRunningBinaryUpgrade (245.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.377486ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-532729] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
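The test above asserts that combining --no-kubernetes with --kubernetes-version fails with exit status 14 and an MK_USAGE message. A small sketch of that kind of mutually-exclusive-flag validation; the flag names mirror the CLI flags in the log, but the wiring is illustrative rather than minikube's actual start code:

package main

import (
	"errors"
	"flag"
	"fmt"
	"os"
)

// validate rejects the combination of --no-kubernetes with an explicit
// --kubernetes-version, matching the usage error captured above.
func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if err := validate(*noK8s, *k8sVersion); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // the test expects exit status 14 for usage errors
	}
	fmt.Println("flags OK")
}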

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-532729 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-532729 --driver=kvm2  --container-runtime=crio: (1m33.741852881s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-532729 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.01s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.5458155s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-532729 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-532729 status -o json: exit status 2 (256.281312ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-532729","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-532729
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-532729: (1.168306927s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.97s)
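The `status -o json` output above is a single JSON object per profile. A short sketch of decoding it with encoding/json, using the exact stdout captured by the test (the struct is a minimal stand-in whose fields follow the JSON keys shown, not minikube's own status type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// clusterStatus mirrors the fields visible in the status output above.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The exact stdout captured by the test.
	raw := `{"Name":"NoKubernetes-532729","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	// Host is running while kubelet and apiserver are stopped: Kubernetes was
	// dropped from the running profile, which is what this subtest verifies.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}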

                                                
                                    
TestNoKubernetes/serial/Start (55.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-532729 --no-kubernetes --driver=kvm2  --container-runtime=crio: (55.781815284s)
--- PASS: TestNoKubernetes/serial/Start (55.78s)

                                                
                                    
TestNetworkPlugins/group/false (2.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-258147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-258147 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.227733ms)

                                                
                                                
-- stdout --
	* [false-258147] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19758
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 22:39:32.130656  705601 out.go:345] Setting OutFile to fd 1 ...
	I1025 22:39:32.130914  705601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:39:32.130924  705601 out.go:358] Setting ErrFile to fd 2...
	I1025 22:39:32.130928  705601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 22:39:32.131123  705601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19758-661979/.minikube/bin
	I1025 22:39:32.131695  705601 out.go:352] Setting JSON to false
	I1025 22:39:32.132656  705601 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":19316,"bootTime":1729876656,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:39:32.132766  705601 start.go:139] virtualization: kvm guest
	I1025 22:39:32.134753  705601 out.go:177] * [false-258147] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:39:32.136039  705601 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 22:39:32.136054  705601 notify.go:220] Checking for updates...
	I1025 22:39:32.138239  705601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:39:32.139380  705601 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19758-661979/kubeconfig
	I1025 22:39:32.140539  705601 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19758-661979/.minikube
	I1025 22:39:32.141645  705601 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:39:32.142823  705601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:39:32.144232  705601 config.go:182] Loaded profile config "NoKubernetes-532729": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1025 22:39:32.144323  705601 config.go:182] Loaded profile config "cert-expiration-928371": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1025 22:39:32.144419  705601 config.go:182] Loaded profile config "running-upgrade-587743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1025 22:39:32.144504  705601 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 22:39:32.180264  705601 out.go:177] * Using the kvm2 driver based on user configuration
	I1025 22:39:32.181484  705601 start.go:297] selected driver: kvm2
	I1025 22:39:32.181497  705601 start.go:901] validating driver "kvm2" against <nil>
	I1025 22:39:32.181509  705601 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:39:32.183218  705601 out.go:201] 
	W1025 22:39:32.184255  705601 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 22:39:32.185400  705601 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-258147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-258147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: cri-docker daemon config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: cri-dockerd version:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: containerd daemon status:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: containerd daemon config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: /etc/containerd/config.toml:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: containerd config dump:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: crio daemon status:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: crio daemon config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: /etc/crio:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

>>> host: crio config:
* Profile "false-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-258147"

----------------------- debugLogs end: false-258147 [took: 2.734960028s] --------------------------------
helpers_test.go:175: Cleaning up "false-258147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-258147
--- PASS: TestNetworkPlugins/group/false (2.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-532729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-532729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.916532ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
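
Note on the check above: systemctl is-active exits 0 only when the unit is active, so the non-zero exit (status 3, i.e. inactive) is exactly the outcome this test wants while Kubernetes is disabled. A minimal sketch of the same probe, assuming the NoKubernetes-532729 profile is still up:

	# re-run the probe the test performs; a non-zero exit means the kubelet unit is
	# not active, which is the expected (passing) result here
	out/minikube-linux-amd64 ssh -p NoKubernetes-532729 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet unit is not active (expected)"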

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.26s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.563228754s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.69282627s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.26s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.56s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-532729
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-532729: (2.557737558s)
--- PASS: TestNoKubernetes/serial/Stop (2.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.23s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-532729 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-532729 --driver=kvm2  --container-runtime=crio: (23.232851142s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-532729 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-532729 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.189265ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4102906469 start -p stopped-upgrade-679974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4102906469 start -p stopped-upgrade-679974 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m10.212128346s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4102906469 -p stopped-upgrade-679974 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4102906469 -p stopped-upgrade-679974 stop: (2.158078519s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-679974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-679974 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.13437622s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.51s)

                                                
                                    
TestPause/serial/Start (97.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-866168 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-866168 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.720768122s)
--- PASS: TestPause/serial/Start (97.72s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (38.56s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-866168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1025 22:43:06.946929  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-866168 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.5366275s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.56s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-679974
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (54.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (54.477024514s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.48s)

                                                
                                    
TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-866168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-866168 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-866168 --output=json --layout=cluster: exit status 2 (241.878847ms)

                                                
                                                
-- stdout --
	{"Name":"pause-866168","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-866168","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
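
For reference, the JSON above is what this subtest asserts against: the HTTP-style codes map to states (418 "Paused", 405 "Stopped", 200 "OK"), and the exit status 2 is tolerated because minikube status exits non-zero when not every component is running. A small sketch for pulling the per-component states out of that output, assuming jq is available on the host:

	# print each node component with its human-readable state ("Paused", "Stopped", "OK")
	out/minikube-linux-amd64 status -p pause-866168 --output=json --layout=cluster \
	  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'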

                                                
                                    
TestPause/serial/Unpause (0.66s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-866168 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-866168 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
TestPause/serial/DeletePaused (0.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-866168 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.72s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.715498004s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.72s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.6s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m6.603171294s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.60s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (102.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m42.111429138s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-258147 "pgrep -a kubelet"
I1025 22:44:06.666011  669177 config.go:182] Loaded profile config "auto-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2rsv8" [1a9ed1b0-a3a0-49f1-acdc-e662f133f944] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2rsv8" [1a9ed1b0-a3a0-49f1-acdc-e662f133f944] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.003769856s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
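
The Localhost and HairPin subtests above are two probes against the same netcat pod: the first confirms the pod can reach itself on 127.0.0.1:8080, the second that it can reach itself back through its own "netcat" Service (hairpin traffic). Condensed, reusing the exact commands from the log:

	# probe 1: loopback inside the pod
	kubectl --context auto-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# probe 2: the pod dialing its own Service name, exercising the hairpin path
	kubectl --context auto-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"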

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1025 22:44:40.903098  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m9.513614843s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2xk2v" [aa38c6e6-ea2a-4762-be99-2e4c833cc067] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005644743s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-258147 "pgrep -a kubelet"
I1025 22:44:48.890309  669177 config.go:182] Loaded profile config "kindnet-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4jzxx" [bac232e6-cd51-4921-95ef-d6e1ed9bee82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4jzxx" [bac232e6-cd51-4921-95ef-d6e1ed9bee82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003867284s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m29.234668826s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gv4k4" [940ee240-6a40-44cd-a57e-00d99518ce84] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005153748s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
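
ControllerPod gates the rest of the calico group on the CNI daemon pod being up: the test waits up to 10m for a Running pod matching k8s-app=calico-node in kube-system. A roughly equivalent manual check, with the selector taken from the log:

	kubectl --context calico-258147 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m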

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-258147 "pgrep -a kubelet"
I1025 22:45:27.155969  669177 config.go:182] Loaded profile config "calico-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-258147 replace --force -f testdata/netcat-deployment.yaml
I1025 22:45:27.379709  669177 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mrz9t" [e7b29296-40a0-4325-8599-9c66bdb17b2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mrz9t" [e7b29296-40a0-4325-8599-9c66bdb17b2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004349897s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-258147 "pgrep -a kubelet"
I1025 22:45:46.699570  669177 config.go:182] Loaded profile config "custom-flannel-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (15.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cxdmb" [1de88326-81bd-4067-b914-f4358de809b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cxdmb" [1de88326-81bd-4067-b914-f4358de809b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.004205576s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.823987069s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.82s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (95.35s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-258147 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.347453561s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-258147 "pgrep -a kubelet"
I1025 22:46:47.461304  669177 config.go:182] Loaded profile config "enable-default-cni-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qdpsj" [2a4b7e48-72c5-4d4b-be6a-48674fff7813] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qdpsj" [2a4b7e48-72c5-4d4b-be6a-48674fff7813] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004227812s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (90.19s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-601894 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-601894 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m30.191359009s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (90.19s)
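
This profile is started with --embed-certs, which (as I understand the flag) inlines the client certificate and key into the generated kubeconfig entry instead of referencing files under .minikube. A hypothetical spot-check, assuming the kubeconfig user is named after the profile as usual:

	# non-empty output means the cert data is embedded rather than referenced by path
	kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-601894")].user.client-certificate-data}' | head -c 40; echo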

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cm8cs" [305edd88-24db-44ea-b32b-7cfc89ab8d8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004431873s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-258147 "pgrep -a kubelet"
I1025 22:47:29.165478  669177 config.go:182] Loaded profile config "flannel-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s4ltv" [6011c064-0b47-4cb7-ada8-d8f4f2732735] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s4ltv" [6011c064-0b47-4cb7-ada8-d8f4f2732735] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004081276s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-258147 "pgrep -a kubelet"
I1025 22:47:56.878628  669177 config.go:182] Loaded profile config "bridge-258147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-258147 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2srt6" [5e327151-8095-422a-a59d-64ad8a2b2491] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2srt6" [5e327151-8095-422a-a59d-64ad8a2b2491] Running
E1025 22:48:06.946511  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/addons-413632/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005551424s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (74.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-657458 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-657458 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m14.08445698s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-258147 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-258147 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1025 22:56:47.714065  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-166447 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-166447 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m3.561562513s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601894 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e61593bd-5c49-4508-92b3-c5cc27f82484] Pending
helpers_test.go:344: "busybox" [e61593bd-5c49-4508-92b3-c5cc27f82484] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e61593bd-5c49-4508-92b3-c5cc27f82484] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005055044s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601894 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-601894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-601894 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.224131874s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-601894 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-601894 --alsologtostderr -v=3
E1025 22:49:06.912245  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:06.918816  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:06.930221  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:06.951821  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:06.993271  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:07.074840  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:07.236522  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:07.558171  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:08.200087  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:09.482401  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:12.044225  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-601894 --alsologtostderr -v=3: (1m31.03274819s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-657458 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [22042419-3867-4640-abf9-13a4d8fea670] Pending
helpers_test.go:344: "busybox" [22042419-3867-4640-abf9-13a4d8fea670] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 22:49:17.166192  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [22042419-3867-4640-abf9-13a4d8fea670] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004437306s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-657458 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-657458 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-657458 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-657458 --alsologtostderr -v=3
E1025 22:49:27.407856  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-657458 --alsologtostderr -v=3: (1m31.080624756s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-166447 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d245be25-6b8c-4005-8562-f52166c3788f] Pending
helpers_test.go:344: "busybox" [d245be25-6b8c-4005-8562-f52166c3788f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d245be25-6b8c-4005-8562-f52166c3788f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004175948s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-166447 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-166447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-166447 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-166447 --alsologtostderr -v=3
E1025 22:49:40.902765  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/functional-889777/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.623734  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.630121  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.641802  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.663160  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.704628  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.786389  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:42.948091  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:43.269733  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:43.911436  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:45.193280  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:47.755239  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:47.890038  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:49:52.876624  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:03.118019  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:20.932759  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:20.939128  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:20.950522  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:20.971894  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:21.013310  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:21.094855  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:21.256456  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:21.578440  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:22.219974  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:23.501780  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:23.600342  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:26.063931  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-166447 --alsologtostderr -v=3: (1m31.270903819s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-601894 -n embed-certs-601894
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-601894 -n embed-certs-601894: exit status 7 (67.584333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-601894 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
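Note: the EnableAddonAfterStop steps read a single field out of minikube's status by passing a Go text/template through --format (here {{.Host}}), and tolerate the non-zero exit of a stopped profile ("may be ok"). A minimal sketch of that template-rendering pattern, against a stand-in struct whose field names simply mirror the templates seen in this log (it is not minikube's actual status type):

	package main

	import (
		"os"
		"text/template"
	)

	// status is a stand-in for the structure a "--format={{.Host}}" style
	// template would be rendered against; field names follow the templates
	// used in the commands above.
	type status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		s := status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
		// Renders just the Host field, matching the "Stopped" stdout captured above.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
	}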

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (325.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-601894 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1025 22:50:28.851427  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:31.186212  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:41.428139  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.014123  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.020482  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.031811  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.053209  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.094701  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.176270  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.337824  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:47.659771  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:48.301193  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:49.582889  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:50:52.145202  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-601894 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (5m25.238891735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-601894 -n embed-certs-601894
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (325.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657458 -n no-preload-657458
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657458 -n no-preload-657458: exit status 7 (76.061646ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-657458 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1025 22:50:57.267412  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Done: out/minikube-linux-amd64 addons enable dashboard -p no-preload-657458 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.462815535s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (312.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-657458 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1025 22:51:01.910287  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:04.561900  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/kindnet-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:07.509617  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-657458 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (5m12.533844248s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-657458 -n no-preload-657458
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (312.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447: exit status 7 (75.479948ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-166447 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (375.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-166447 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1025 22:51:27.991885  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:42.872096  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/calico-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.714566  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.721055  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.732513  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.753957  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.795506  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:47.877040  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:48.039315  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:48.360611  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:49.002791  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:50.284814  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:50.773479  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/auto-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:52.847559  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:51:57.969712  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:08.211104  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:52:08.953733  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-166447 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (6m15.695172717s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (375.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-005932 --alsologtostderr -v=3
E1025 22:53:38.074504  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/bridge-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-005932 --alsologtostderr -v=3: (1.369260115s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005932 -n old-k8s-version-005932: exit status 7 (67.497963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-005932 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mtxvz" [3b829768-18bf-405b-be17-16d030e46ae9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005595174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mtxvz" [3b829768-18bf-405b-be17-16d030e46ae9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003804842s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-601894 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-601894 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
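Note: VerifyKubernetesImages lists the profile's images as JSON and reports anything outside the expected registries, which is where the "Found non-minikube image" lines come from. A small sketch of that kind of check with a stand-in JSON shape (minikube's real `image list --format=json` schema may differ):

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	// image is a stand-in for one entry of the JSON image list; only the
	// repo-tag strings matter for this check.
	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	func main() {
		raw := `[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]},
		         {"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}]`

		var imgs []image
		if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
			panic(err)
		}
		// Anything outside the expected registry is reported, mirroring the
		// "Found non-minikube image" lines in the log above.
		for _, img := range imgs {
			for _, tag := range img.RepoTags {
				if !strings.HasPrefix(tag, "registry.k8s.io/") {
					fmt.Println("Found non-minikube image:", tag)
				}
			}
		}
	}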

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-601894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-601894 -n embed-certs-601894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-601894 -n embed-certs-601894: exit status 2 (244.364155ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-601894 -n embed-certs-601894
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-601894 -n embed-certs-601894: exit status 2 (270.161875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-601894 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-601894 -n embed-certs-601894
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-601894 -n embed-certs-601894
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)
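Note: the Pause step pauses the profile, re-reads {{.APIServer}} and {{.Kubelet}}, tolerates the exit status 2 a paused cluster returns, then unpauses. A short sketch of invoking such a status command from Go and treating exit code 2 as the "may be ok" case noted in the log (binary path and flags copied from the commands above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-601894", "-n", "embed-certs-601894")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("status: %s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 2:
			// A paused cluster reports non-zero here; the log marks this "may be ok".
			fmt.Printf("status (exit 2, may be ok): %s", out)
		default:
			panic(err)
		}
	}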

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357495 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357495 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (48.640185286s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cg5j6" [3a924531-4db7-4f3a-b663-1c8a1e44e73d] Running
E1025 22:56:14.718220  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/custom-flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004017003s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cg5j6" [3a924531-4db7-4f3a-b663-1c8a1e44e73d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004811619s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-657458 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-657458 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-657458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657458 -n no-preload-657458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657458 -n no-preload-657458: exit status 2 (244.054959ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657458 -n no-preload-657458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657458 -n no-preload-657458: exit status 2 (241.738996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-657458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-657458 -n no-preload-657458
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-657458 -n no-preload-657458
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-357495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08891998s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-357495 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-357495 --alsologtostderr -v=3: (10.532999781s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357495 -n newest-cni-357495
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357495 -n newest-cni-357495: exit status 7 (67.585091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-357495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-357495 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1025 22:57:15.418760  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/enable-default-cni-258147/client.crt: no such file or directory" logger="UnhandledError"
E1025 22:57:22.936737  669177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19758-661979/.minikube/profiles/flannel-258147/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-357495 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (37.668255565s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-357495 -n newest-cni-357495
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jnsn2" [6390a2af-f3ba-42b2-b747-81bba8bae3f2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jnsn2" [6390a2af-f3ba-42b2-b747-81bba8bae3f2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004752764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jnsn2" [6390a2af-f3ba-42b2-b747-81bba8bae3f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005506735s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-166447 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-166447 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-166447 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447: exit status 2 (244.944174ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447: exit status 2 (238.506228ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-166447 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-166447 -n default-k8s-diff-port-166447
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-357495 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-357495 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357495 -n newest-cni-357495
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357495 -n newest-cni-357495: exit status 2 (230.037127ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357495 -n newest-cni-357495
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357495 -n newest-cni-357495: exit status 2 (228.259454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-357495 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-357495 -n newest-cni-357495
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-357495 -n newest-cni-357495
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.56s)

                                                
                                    

Test skip (39/326)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
151 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 2.93
267 TestNetworkPlugins/group/cilium 3.24
280 TestStartStop/group/disable-driver-mounts 0.2
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-413632 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-258147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-258147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-258147"

                                                
                                                
----------------------- debugLogs end: kubenet-258147 [took: 2.782974479s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-258147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-258147
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-258147 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-258147" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-258147

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-258147" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-258147"

                                                
                                                
----------------------- debugLogs end: cilium-258147 [took: 3.104374142s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-258147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-258147
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-723378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-723378
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    